Compare commits

...

73 Commits
master ... dev

Author SHA1 Message Date
Bertrand Benjamin 98d9fd4026 Fix: save csv with nice column order 2021-11-27 17:17:18 +01:00
Bertrand Benjamin 28fc41315f Feat: remove prompt commands 2021-11-27 17:16:12 +01:00
Bertrand Benjamin 4b30f39354 Feat: remove pyInquier depedencie 2021-11-26 22:02:19 +01:00
Bertrand Benjamin 58647a734c Fix: column order in score table 2021-11-22 16:33:59 +01:00
Bertrand Benjamin 29f67cfa0c Fix: can't save in create exam 2021-05-11 09:02:09 +02:00
Bertrand Benjamin 1ffdd8676b Merge ssh://git_opytex:/lafrite/recopytex into dev 2021-05-10 11:27:24 +02:00
Bertrand Benjamin 8f2ae96338 Feat: add handling ppre 2021-04-18 09:59:12 +02:00
Bertrand Benjamin a0e94f52b1 Feat: formating questions 2021-04-05 08:31:05 +02:00
Bertrand Benjamin c84f9845b2 Feat: visualisation des Competences et des themes dans students 2021-02-27 10:31:52 +01:00
Bertrand Benjamin d9e95f2186 Feat: return empty fig 2021-02-27 10:03:24 +01:00
Bertrand Benjamin 581b0f4f2f Feat: table des évaluations 2021-02-23 17:55:43 +01:00
Bertrand Benjamin 3dbfc85447 Feat: filter dans store scores 2021-02-23 17:40:18 +01:00
Bertrand Benjamin b5bf1ac137 Feat: add students to paths 2021-02-23 17:07:05 +01:00
Bertrand Benjamin 74d751a586 Feat: update student list 2021-02-23 17:06:55 +01:00
Bertrand Benjamin 1855d4016d Feat: start student_analysis 2021-02-23 16:53:59 +01:00
Bertrand Benjamin ff94470fb4 Feat: Start feedback on eval 2021-02-23 16:14:05 +01:00
Bertrand Benjamin d322452a6e Feat: rename exam-analysis to dashboard 2021-02-23 16:10:16 +01:00
Bertrand Benjamin e1d3940e9d Feat: add total score_rate 2021-02-08 15:45:50 +01:00
Bertrand Benjamin 7dba11996a Feat: formating and split in sections 2021-02-08 15:19:09 +01:00
Bertrand Benjamin 3250a600c9 Feat: start the layout for create_exam 2021-01-27 16:17:44 +01:00
Bertrand Benjamin 589d63ff29 Feat: not showing all columns in bigtable and fixe first columns 2021-01-27 16:16:54 +01:00
Bertrand Benjamin 429fed6a1e Feat: default values for elements 2021-01-24 06:53:06 +01:00
Bertrand Benjamin 1255bf4b9e Fix: remove useless print 2021-01-23 06:54:19 +01:00
Bertrand Benjamin 1fe7665753 Merge branch 'dev' of git_opytex:/lafrite/recopytex into dev 2021-01-22 11:14:34 +01:00
Bertrand Benjamin e08e4a32a8 Feat: exam creation page 2021-01-22 11:13:35 +01:00
Bertrand Benjamin b737612adb Feat: Start display summary 2021-01-22 05:39:14 +01:00
Bertrand Benjamin 9c19e2ac56 Feat: New page with input fields 2021-01-21 22:17:49 +01:00
Bertrand Benjamin eb60734c26 Fix: remove useless import 2021-01-21 22:17:33 +01:00
Bertrand Benjamin 329bcc460c Fix: calculer -> chercher 2021-01-21 22:17:02 +01:00
Bertrand Benjamin 95fc842c1d Feat: 2nd page to create exam 2021-01-21 15:12:24 +01:00
Bertrand Benjamin e0ca1a458b Fix: column id to see student and score_rate 2021-01-21 14:11:39 +01:00
Bertrand Benjamin eb1abbe868 Fix: get back exam graphs 2021-01-21 14:01:57 +01:00
Bertrand Benjamin 412e624791 Merge remote-tracking branch 'origin/dev' into dev 2021-01-21 09:57:33 +01:00
Bertrand Benjamin e8bf0b3f0a Fix: name and bareme in final_score_table and describe rounding 2021-01-21 09:52:49 +01:00
Bertrand Benjamin c057fa11e7 Feat: stop rounding score at 0.5 2021-01-21 09:52:49 +01:00
Bertrand Benjamin e15119605f Merge branch 'dev' of git_opytex:/lafrite/recopytex into dev 2021-01-21 09:38:58 +01:00
Bertrand Benjamin 494567cdb5 Merge branch 'dev' of git_opytex:/lafrite/recopytex into dev 2021-01-21 09:25:58 +01:00
Bertrand Benjamin 84fcee625d Feat: split dashboard 2021-01-20 20:54:59 +01:00
Bertrand Benjamin f62c898162 Fix: remove unecessary import 2021-01-20 20:51:22 +01:00
Bertrand Benjamin 7955b989b4 Fix: missing category (0) in final_score plot 2021-01-17 22:26:16 +01:00
Bertrand Benjamin 4f14e3518c Fix: concatenate index for competence plot 2021-01-17 22:21:58 +01:00
Bertrand Benjamin 4bf8f4003e Feat: remove bootstrap and replace it with css 2021-01-17 22:04:52 +01:00
Bertrand Benjamin a14d47b15c Feat: Clean empty fig 2021-01-15 17:49:30 +01:00
Bertrand Benjamin 09ac9f01f8 Feat: add competence fig and better error management 2021-01-15 13:48:57 +01:00
Bertrand Benjamin 0a5a931d01 Feat: add row to scores_table!! 2021-01-14 21:53:38 +01:00
Bertrand Benjamin 21397272c9 Feat: move dashboard to its own directory 2021-01-14 20:09:25 +01:00
Bertrand Benjamin 894ebc4ec8 Feat: add competence bar plot 2021-01-13 08:28:54 +01:00
Bertrand Benjamin f6bfac4144 Feat: Hist graph and describe 2021-01-12 22:32:26 +01:00
Bertrand Benjamin cfd5928853 Feat: autosave while editing scores 2021-01-12 17:25:58 +01:00
Bertrand Benjamin 8fcad94df4 Feat: start analysis dash board 2021-01-10 20:46:14 +01:00
Bertrand Benjamin 27d7c45980 Feat: add temporary save 2021-01-10 07:21:28 +01:00
Bertrand Benjamin 159e7a9f2e Feat: move exam to Exam class 2021-01-10 06:53:16 +01:00
Bertrand Benjamin 72afb26e2a Fix: indentation 2021-01-10 06:52:56 +01:00
Bertrand Benjamin 6eb918e0f5 Feat: can read exam config from yaml 2021-01-06 09:09:35 +01:00
Bertrand Benjamin 56a669b2be Feat: remove exQty in prompt 2021-01-06 08:53:06 +01:00
Bertrand Benjamin a5f22fc8cd Fix: commentaire -> comment 2021-01-06 07:59:42 +01:00
Bertrand Benjamin 5177df06d7 Fix: element -> row 2021-01-05 09:15:41 +01:00
Bertrand Benjamin d78fcbc281 Feat: add competences 2021-01-05 09:15:24 +01:00
Bertrand Benjamin 98fa768541 format: black formating 2021-01-05 09:14:52 +01:00
Bertrand Benjamin 00c2681823 Fix: element -> row 2021-01-05 09:14:37 +01:00
Bertrand Benjamin 52f2f3f4cf Feat: incoporate cometences config 2021-01-01 18:04:28 +01:00
Bertrand Benjamin 4ea7f8db14 Feat: replace references to PyInquier with prompt_toolkit 2021-01-01 17:47:13 +01:00
Bertrand Benjamin 04a2506d86 Feat: rewrite new_exam prompt without Pyinquier 2020-12-31 18:00:42 +01:00
Bertrand Benjamin 77c358b0c1 Feat: écriture du fichier csv 2020-10-04 18:49:44 +02:00
Bertrand Benjamin 1886deb430 Feat: question prompts 2020-10-04 18:10:43 +02:00
Bertrand Benjamin 5e0f2d92ef Feat: prompt for exercises 2020-10-04 16:38:36 +02:00
Bertrand Benjamin 49cc52f7d1 Feat: prompts and write prompt_exam 2020-10-04 16:11:55 +02:00
Bertrand Benjamin 6d93ef62d7 Feat: split requirements 2020-10-04 16:11:41 +02:00
Bertrand Benjamin 488df4cb0c Feat: start example folder 2020-10-04 15:07:11 +02:00
Bertrand Benjamin 9136f359e0 Feat: add .vim in gitignore 2020-10-04 07:30:21 +02:00
Bertrand Benjamin 1dfee17990 Doc: des explications 2020-10-04 07:29:37 +02:00
Bertrand Benjamin 400fb0a690 FEat: add comments 2020-10-04 07:20:08 +02:00
Bertrand Benjamin 04a1ed9378 Feat: remove versions in requirements 2020-10-04 07:09:18 +02:00
24 changed files with 1848 additions and 66 deletions

4
.gitignore vendored

@ -122,3 +122,7 @@ dmypy.json
# Pyre type checker
.pyre/
# vim
.vim

View File

@ -6,3 +6,29 @@ This time, we use:
- yaml files for the information about the students
- Notebooks for the analysis
- Papermill to produce the notebooks from templates
## The CSV files
The parameters are described in ./recopytex/config.py
### Description of the questions
- Trimestre
- Nom
- Date
- Exercice
- Question
- Competence
- Domaine
- Commentaire
- Bareme
- Est_nivele
### Values used to grade the students
- Score: 0, 1, 2, 3
- No answer: .
- Absent: a
- Exempted: (empty)
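
To make the grading values concrete, here is a small illustrative sketch (not part of the diff itself) that follows the `score_to_mark` logic appearing further down in this comparison: a levelled item converts its 0–3 score into a fraction of its `Bareme`, while `.`, `a` and empty cells stand for no answer, absent and exempted.

```python
# Illustrative sketch only -- the actual logic lives in score_to_mark /
# compute_mark, whose diff appears later on this page.
def cell_to_mark(cell, bareme, is_leveled=True):
    """Turn one score cell of the CSV into a mark out of `bareme`."""
    if cell in (".", "a", ""):        # no answer / absent / exempted
        return None                   # the real code post-processes these separately
    score = float(cell)
    if is_leveled:                    # 0, 1, 2, 3 scale -> fraction of the bareme
        return round(score * bareme / 3, 2)
    return score                      # non-levelled items keep the raw score


print(cell_to_mark("2", 2))   # 1.33
print(cell_to_mark(".", 2))   # None
```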

32
example/recoconfig.yml Normal file

@ -0,0 +1,32 @@
---
source: ./
output: ./
templates: templates/
competences:
Chercher:
name: Chercher
abrv: Cher
Représenter:
name: Représenter
abrv: Rep
Modéliser:
name: Modéliser
abrv: Mod
Raisonner:
name: Raisonner
abrv: Rai
Calculer:
name: Calculer
abrv: Cal
Communiquer:
name: Communiquer
abrv: Com
tribes:
- name: Tribe1
type: Type1
students: tribe1.csv
- name: Tribe2
students: tribe2.csv
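
This example configuration is what the dashboards and prompts read at start-up: the small getconfig module added later in this diff loads recoconfig.yml with PyYAML, and the pages then pick tribes and competences out of the resulting dictionary. A minimal sketch of that lookup, using only the file name and keys shown in the example above:

```python
# Sketch of how recoconfig.yml is consumed (see the getconfig module and the
# tribe/competence dropdowns elsewhere in this diff).
import yaml

with open("recoconfig.yml", "r") as f:
    config = yaml.load(f, Loader=yaml.FullLoader)

tribe_names = [t["name"] for t in config["tribes"]]     # ['Tribe1', 'Tribe2']
competences = list(config["competences"].keys())        # ['Chercher', ..., 'Communiquer']
first_students_csv = config["tribes"][0]["students"]    # 'tribe1.csv'
```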

21
example/tribe1.csv Normal file

@ -0,0 +1,21 @@
Nom,email
Star Tice,stice0@jalbum.net
Umberto Dingate,udingate1@tumblr.com
Starlin Crangle,scrangle2@wufoo.com
Humbert Bourcq,hbourcq3@g.co
Gabriella Handyside,ghandyside4@patch.com
Stewart Eaves,seaves5@ycombinator.com
Erick Going,egoing6@va.gov
Ase Praton,apraton7@va.gov
Rollins Planks,rplanks8@delicious.com
Dunstan Sarjant,dsarjant9@naver.com
Stacy Guiton,sguitona@themeforest.net
Ange Stanes,astanesb@marriott.com
Amabelle Elleton,aelletonc@squidoo.com
Darn Broomhall,dbroomhalld@cisco.com
Dyan Chatto,dchattoe@npr.org
Keane Rennebach,krennebachf@dot.gov
Nari Paulton,npaultong@gov.uk
Brandy Wase,bwaseh@ftc.gov
Jaclyn Firidolfi,jfiridolfii@reuters.com
Violette Lockney,vlockneyj@chron.com

21
example/tribe2.csv Normal file

@ -0,0 +1,21 @@
Nom,email
Elle McKintosh,emckintosh0@1und1.de
Ty Megany,tmegany1@reuters.com
Pippa Borrows,pborrows2@a8.net
Sonny Eskrick,seskrick3@123-reg.co.uk
Mollee Britch,mbritch4@usda.gov
Ingram Plaistowe,iplaistowe5@purevolume.com
Fay Vanyard,fvanyard6@sbwire.com
Nancy Rase,nrase7@omniture.com
Rachael Ruxton,rruxton8@bravesites.com
Tallie Rushmer,trushmer9@home.pl
Seward MacIlhagga,smacilhaggaa@hatena.ne.jp
Lizette Searl,lsearlb@list-manage.com
Talya Mannagh,tmannaghc@webnode.com
Jordan Witherbed,jwitherbedd@unesco.org
Reagan Botcherby,rbotcherbye@scientificamerican.com
Libbie Shoulder,lshoulderf@desdev.cn
Abner Khomich,akhomichg@youtube.com
Zollie Kitman,zkitmanh@forbes.com
Fiorenze Durden,fdurdeni@feedburner.com
Kevyn Race,kracej@seattletimes.com

View File

@ -17,7 +17,7 @@ def try_replace(x, old, new):
def extract_students(df, no_student_columns=NO_ST_COLUMNS.values()):
- """ Extract the list of students from df
+ """Extract the list of students from df
:param df: the dataframe
:param no_student_columns: columns that are not students
@ -30,7 +30,7 @@ def extract_students(df, no_student_columns=NO_ST_COLUMNS.values()):
def flat_df_students(
df, no_student_columns=NO_ST_COLUMNS.values(), postprocessing=True
):
- """ Flat the dataframe by returning a dataframe with on student on each line
+ """Flat the dataframe by returning a dataframe with on student on each line
:param df: the dataframe (one row per questions)
:param no_student_columns: columns that are not students
@ -63,7 +63,7 @@ def flat_df_students(
def flat_df_for(
df, student, no_student_columns=NO_ST_COLUMNS.values(), postprocessing=True
):
- """ Extract the data only for one student
+ """Extract the data only for one student
:param df: the dataframe (one row per questions)
:param no_student_columns: columns that are not students
@ -88,7 +88,7 @@ def flat_df_for(
def postprocess(df):
- """ Postprocessing score dataframe
+ """Postprocessing score dataframe
- Replace na with an empty string
- Replace "NOANSWER" with -1

View File

View File

@ -0,0 +1,5 @@
import dash
app = dash.Dash(__name__, suppress_callback_exceptions=True)
# app = dash.Dash(__name__)
server = app.server

View File

@ -0,0 +1,66 @@
body {
margin: 0px;
font-family: 'Source Sans Pro','Roboto','Open Sans','Liberation Sans','DejaVu Sans','Verdana','Helvetica','Arial',sans-serif;
}
header {
margin: 0px 0px 20px 0px;
background-color: #333333;
color: #ffffff;
padding: 20px;
}
header > h1 {
margin: 0px;
}
main {
width: 95vw;
margin: auto;
}
section {
margin-top: 20px;
margin-bottom: 20px;
}
/* Exam analysis */
#select {
margin-bottom: 20px;
}
#select > div {
width: 40vw;
margin: auto;
}
#analysis {
display: flex;
flex-flow: row wrap;
}
#analysis > * {
display: flex;
flex-flow: column;
width: 45vw;
margin: auto;
}
/* Create new exam */
#new-exam {
display: flex;
flex-flow: row;
justify-content: space-between;
}
#new-exam label {
width: 20%;
display: flex;
flex-flow: column;
justify-content: space-between;
}

View File

@ -0,0 +1,355 @@
#!/usr/bin/env python
# encoding: utf-8
import dash
import dash_html_components as html
import dash_core_components as dcc
import dash_table
import plotly.graph_objects as go
from datetime import date, datetime
import uuid
import pandas as pd
import yaml
from ...scripts.getconfig import config
from ...config import NO_ST_COLUMNS
from ..app import app
from ...scripts.exam import Exam
QUESTION_COLUMNS = [
{"id": "id", "name": "Question"},
{
"id": "competence",
"name": "Competence",
"presentation": "dropdown",
},
{"id": "theme", "name": "Domaine"},
{"id": "comment", "name": "Commentaire"},
{"id": "score_rate", "name": "Bareme"},
{"id": "is_leveled", "name": "Est_nivele"},
]
def get_current_year_limit():
today = date.today()
if today.month > 8:
return {
"min_date_allowed": date(today.year, 9, 1),
"max_date_allowed": date(today.year + 1, 7, 15),
"initial_visible_month": today,
}
return {
"min_date_allowed": date(today.year - 1, 9, 1),
"max_date_allowed": date(today.year, 7, 15),
"initial_visible_month": today,
}
layout = html.Div(
[
html.Header(
children=[
html.H1("Création d'une évaluation"),
html.P("Pas encore de sauvegarde", id="is-saved"),
html.Button("Enregistrer dans csv", id="save-csv"),
],
),
html.Main(
children=[
html.Section(
children=[
html.Form(
id="new-exam",
children=[
html.Label(
children=[
"Classe",
dcc.Dropdown(
id="tribe",
options=[
{"label": t["name"], "value": t["name"]}
for t in config["tribes"]
],
value=config["tribes"][0]["name"],
),
]
),
html.Label(
children=[
"Nom de l'évaluation",
dcc.Input(
id="exam_name",
type="text",
placeholder="Nom de l'évaluation",
),
]
),
html.Label(
children=[
"Date",
dcc.DatePickerSingle(
id="date",
date=date.today(),
**get_current_year_limit(),
),
]
),
html.Label(
children=[
"Trimestre",
dcc.Dropdown(
id="term",
options=[
{"label": i + 1, "value": i + 1}
for i in range(3)
],
value=1,
),
]
),
],
),
],
id="form",
),
html.Section(
children=[
html.Div(
id="exercises",
children=[],
),
html.Button(
"Ajouter un exercice",
id="add-exercise",
className="add-exercise",
),
html.Div(
id="summary",
),
],
id="exercises",
),
html.Section(
children=[
html.Div(
id="score_rate",
),
html.Div(
id="exercises-viz",
),
html.Div(
id="competences-viz",
),
html.Div(
id="themes-viz",
),
],
id="visualisation",
),
]
),
dcc.Store(id="exam_store"),
]
)
@app.callback(
dash.dependencies.Output("exercises", "children"),
dash.dependencies.Input("add-exercise", "n_clicks"),
dash.dependencies.State("exercises", "children"),
)
def add_exercise(n_clicks, children):
if n_clicks is None:
return children
element_table = pd.DataFrame(columns=[c["id"] for c in QUESTION_COLUMNS])
element_table = element_table.append(
pd.Series(
data={
"id": 1,
"competence": "Rechercher",
"theme": "",
"comment": "",
"score_rate": 1,
"is_leveled": 1,
},
name=0,
)
)
new_exercise = html.Div(
children=[
html.Div(
children=[
dcc.Input(
id={"type": "exercice", "index": str(n_clicks)},
type="text",
value=f"Exercice {len(children)+1}",
placeholder="Nom de l'exercice",
className="exercise-name",
),
html.Button(
"X",
id={"type": "rm_exercice", "index": str(n_clicks)},
className="delete-exercise",
),
],
className="exercise-head",
),
dash_table.DataTable(
id={"type": "elements", "index": str(n_clicks)},
columns=QUESTION_COLUMNS,
data=element_table.to_dict("records"),
editable=True,
row_deletable=True,
dropdown={
"competence": {
"options": [
{"label": i, "value": i} for i in config["competences"]
]
},
},
style_cell={
"whiteSpace": "normal",
"height": "auto",
},
),
html.Button(
"Ajouter un élément de notation",
id={"type": "add-element", "index": str(n_clicks)},
className="add-element",
),
],
className="exercise",
id=f"exercise-{n_clicks}",
)
children.append(new_exercise)
return children
@app.callback(
dash.dependencies.Output(
{"type": "elements", "index": dash.dependencies.MATCH}, "data"
),
dash.dependencies.Input(
{"type": "add-element", "index": dash.dependencies.MATCH}, "n_clicks"
),
[
dash.dependencies.State(
{"type": "elements", "index": dash.dependencies.MATCH}, "data"
),
],
prevent_initial_call=True,
)
def add_element(n_clicks, elements):
if n_clicks is None or n_clicks < len(elements):
return elements
df = pd.DataFrame.from_records(elements)
df = df.append(
pd.Series(
data={
"id": len(df) + 1,
"competence": "",
"theme": "",
"comment": "",
"score_rate": 1,
"is_leveled": 1,
},
name=n_clicks,
)
)
return df.to_dict("records")
def exam_generalities(tribe, exam_name, date, term, exercices=[], elements=[]):
return [
html.H1(f"{exam_name} pour les {tribe}"),
html.P(f"Fait le {date} (Trimestre {term})"),
]
def exercise_summary(identifier, name, elements=[]):
df = pd.DataFrame.from_records(elements)
return html.Div(
[
html.H2(name),
dash_table.DataTable(
columns=[{"id": c, "name": c} for c in df], data=elements
),
]
)
@app.callback(
dash.dependencies.Output("exam_store", "data"),
[
dash.dependencies.Input("tribe", "value"),
dash.dependencies.Input("exam_name", "value"),
dash.dependencies.Input("date", "date"),
dash.dependencies.Input("term", "value"),
dash.dependencies.Input(
{"type": "exercice", "index": dash.dependencies.ALL}, "value"
),
dash.dependencies.Input(
{"type": "elements", "index": dash.dependencies.ALL}, "data"
),
],
dash.dependencies.State({"type": "elements", "index": dash.dependencies.ALL}, "id"),
)
def store_exam(tribe, exam_name, date, term, exercices, elements, elements_id):
exam = Exam(exam_name, tribe, date, term)
for (i, name) in enumerate(exercices):
ex_elements_id = [el for el in elements_id if el["index"] == str(i + 1)][0]
index = elements_id.index(ex_elements_id)
ex_elements = elements[index]
exam.add_exercise(name, ex_elements)
return exam.to_dict()
@app.callback(
dash.dependencies.Output("score_rate", "children"),
dash.dependencies.Input("exam_store", "data"),
prevent_initial_call=True,
)
def score_rate(data):
exam = Exam(**data)
return [html.P(f"Barème /{exam.score_rate}")]
@app.callback(
dash.dependencies.Output("competences-viz", "figure"),
dash.dependencies.Input("exam_store", "data"),
prevent_initial_call=True,
)
def competences_viz(data):
exam = Exam(**data)
return [html.P(str(exam.competences_rate))]
@app.callback(
dash.dependencies.Output("themes-viz", "children"),
dash.dependencies.Input("exam_store", "data"),
prevent_initial_call=True,
)
def themes_viz(data):
exam = Exam(**data)
themes_rate = exam.themes_rate
fig = go.Figure()
if themes_rate:
fig.add_trace(go.Pie(labels=list(themes_rate.keys()), values=list(themes_rate.values())))
return [dcc.Graph(figure=fig)]
return []
@app.callback(
dash.dependencies.Output("is-saved", "children"),
dash.dependencies.Input("save-csv", "n_clicks"),
dash.dependencies.State("exam_store", "data"),
prevent_initial_call=True,
)
def save_to_csv(n_clicks, data):
exam = Exam(**data)
csv = exam.path(".csv")
exam.write_csv()
return [f"Dernière sauvegarde {datetime.today()} dans {csv}"]

View File

@ -0,0 +1,406 @@
#!/usr/bin/env python
# encoding: utf-8
import dash
import dash_html_components as html
import dash_core_components as dcc
import dash_table
from dash.exceptions import PreventUpdate
import plotly.graph_objects as go
from pathlib import Path
from datetime import datetime
import pandas as pd
import numpy as np
from ... import flat_df_students, pp_q_scores
from ...config import NO_ST_COLUMNS
from ...scripts.getconfig import config
from ..app import app
COLORS = {
".": "black",
0: "#E7472B",
1: "#FF712B",
2: "#F2EC4C",
3: "#68D42F",
}
layout = html.Div(
children=[
html.Header(
children=[
html.H1("Analyse des notes"),
html.P("Dernière sauvegarde", id="lastsave"),
],
),
html.Main(
[
html.Section(
[
html.Div(
[
"Classe: ",
dcc.Dropdown(
id="tribe",
options=[
{"label": t["name"], "value": t["name"]}
for t in config["tribes"]
],
value=config["tribes"][0]["name"],
),
],
style={
"display": "flex",
"flex-flow": "column",
},
),
html.Div(
[
"Evaluation: ",
dcc.Dropdown(id="csv"),
],
style={
"display": "flex",
"flex-flow": "column",
},
),
],
id="select",
style={
"display": "flex",
"flex-flow": "row wrap",
},
),
html.Div(
[
html.Div(
dash_table.DataTable(
id="final_score_table",
columns=[
{"id": "Eleve", "name": "Élève"},
{"id": "Note", "name": "Note"},
{"id": "Bareme", "name": "Barème"},
],
data=[],
style_data_conditional=[
{
"if": {"row_index": "odd"},
"backgroundColor": "rgb(248, 248, 248)",
}
],
style_data={
"width": "100px",
"maxWidth": "100px",
"minWidth": "100px",
},
),
id="final_score_table_container",
),
html.Div(
[
dash_table.DataTable(
id="final_score_describe",
columns=[
{"id": "count", "name": "count"},
{"id": "mean", "name": "mean"},
{"id": "std", "name": "std"},
{"id": "min", "name": "min"},
{"id": "25%", "name": "25%"},
{"id": "50%", "name": "50%"},
{"id": "75%", "name": "75%"},
{"id": "max", "name": "max"},
],
),
dcc.Graph(
id="fig_assessment_hist",
),
dcc.Graph(id="fig_competences"),
],
id="desc_plots",
),
],
id="analysis",
),
html.Div(
[
dash_table.DataTable(
id="scores_table",
columns=[
{"id": "id", "name": "Question"},
{
"id": "competence",
"name": "Competence",
},
{"id": "theme", "name": "Domaine"},
{"id": "comment", "name": "Commentaire"},
{"id": "score_rate", "name": "Bareme"},
{"id": "is_leveled", "name": "Est_nivele"},
],
style_cell={
"whiteSpace": "normal",
"height": "auto",
},
fixed_columns={"headers": True, "data": 7},
style_table={"minWidth": "100%"},
style_data_conditional=[],
editable=True,
),
html.Button("Ajouter un élément", id="btn_add_element"),
],
id="big_table",
),
dcc.Store(id="final_score"),
],
className="content",
style={
"width": "95vw",
"margin": "auto",
},
),
],
)
@app.callback(
[
dash.dependencies.Output("csv", "options"),
dash.dependencies.Output("csv", "value"),
],
[dash.dependencies.Input("tribe", "value")],
)
def update_csvs(value):
if not value:
raise PreventUpdate
p = Path(value)
csvs = list(p.glob("*.csv"))
try:
return [{"label": str(c), "value": str(c)} for c in csvs], str(csvs[0])
except IndexError:
return []
@app.callback(
[
dash.dependencies.Output("final_score", "data"),
],
[dash.dependencies.Input("scores_table", "data")],
)
def update_final_scores(data):
if not data:
raise PreventUpdate
scores = pd.DataFrame.from_records(data)
try:
if scores.iloc[0]["Commentaire"] == "commentaire" or scores.iloc[0].str.contains("PPRE").any():
scores.drop([0], inplace=True)
except KeyError:
pass
scores = flat_df_students(scores).dropna(subset=["Score"])
if scores.empty:
return [{}]
scores = pp_q_scores(scores)
assessment_scores = scores.groupby(["Eleve"]).agg({"Note": "sum", "Bareme": "sum"})
return [assessment_scores.reset_index().to_dict("records")]
@app.callback(
[
dash.dependencies.Output("final_score_table", "data"),
],
[dash.dependencies.Input("final_score", "data")],
)
def update_final_scores_table(data):
assessment_scores = pd.DataFrame.from_records(data)
return [assessment_scores.to_dict("records")]
@app.callback(
[
dash.dependencies.Output("final_score_describe", "data"),
],
[dash.dependencies.Input("final_score", "data")],
)
def update_final_scores_descr(data):
scores = pd.DataFrame.from_records(data)
if scores.empty:
return [[{}]]
desc = scores["Note"].describe().T.round(2)
return [[desc.to_dict()]]
@app.callback(
[
dash.dependencies.Output("fig_assessment_hist", "figure"),
],
[dash.dependencies.Input("final_score", "data")],
)
def update_final_scores_hist(data):
assessment_scores = pd.DataFrame.from_records(data)
if assessment_scores.empty:
return [go.Figure(data=[go.Scatter(x=[], y=[])])]
ranges = np.linspace(
-0.5,
assessment_scores.Bareme.max(),
int(assessment_scores.Bareme.max() * 2 + 2),
)
bins = pd.cut(assessment_scores["Note"], ranges)
assessment_scores["Bin"] = bins
assessment_grouped = (
assessment_scores.reset_index()
.groupby("Bin")
.agg({"Bareme": "count", "Eleve": lambda x: "\n".join(x)})
)
assessment_grouped.index = assessment_grouped.index.map(lambda i: i.right)
fig = go.Figure()
fig.add_bar(
x=assessment_grouped.index,
y=assessment_grouped.Bareme,
text=assessment_grouped.Eleve,
textposition="auto",
hovertemplate="",
marker_color="#4E89DE",
)
fig.update_layout(
height=300,
margin=dict(l=5, r=5, b=5, t=5),
)
return [fig]
@app.callback(
[
dash.dependencies.Output("fig_competences", "figure"),
],
[dash.dependencies.Input("scores_table", "data")],
)
def update_competence_fig(data):
scores = pd.DataFrame.from_records(data)
try:
if scores.iloc[0]["Commentaire"] == "commentaire" or scores.iloc[0].str.contains("PPRE").any():
scores.drop([0], inplace=True)
except KeyError:
pass
scores = flat_df_students(scores).dropna(subset=["Score"])
if scores.empty:
return [go.Figure(data=[go.Scatter(x=[], y=[])])]
scores = pp_q_scores(scores)
pt = pd.pivot_table(
scores,
index=["Exercice", "Question", "Commentaire"],
columns="Score",
aggfunc="size",
fill_value=0,
)
for i in {i for i in pt.index.get_level_values(0)}:
pt.loc[(str(i), "", ""), :] = ""
pt.sort_index(inplace=True)
index = (
pt.index.get_level_values(0).map(str)
+ ":"
+ pt.index.get_level_values(1).map(str)
+ " "
+ pt.index.get_level_values(2).map(str)
)
fig = go.Figure()
bars = [
{"score": -1, "name": "Pas de réponse", "color": COLORS["."]},
{"score": 0, "name": "Faux", "color": COLORS[0]},
{"score": 1, "name": "Peu juste", "color": COLORS[1]},
{"score": 2, "name": "Presque juste", "color": COLORS[2]},
{"score": 3, "name": "Juste", "color": COLORS[3]},
]
for b in bars:
try:
fig.add_bar(
x=index, y=pt[b["score"]], name=b["name"], marker_color=b["color"]
)
except KeyError:
pass
fig.update_layout(barmode="relative")
fig.update_layout(
height=500,
margin=dict(l=5, r=5, b=5, t=5),
)
return [fig]
@app.callback(
[
dash.dependencies.Output("lastsave", "children"),
],
[
dash.dependencies.Input("scores_table", "data"),
dash.dependencies.State("csv", "value"),
],
)
def save_scores(data, csv):
try:
scores = pd.DataFrame.from_records(data)
scores = scores_table_column_order(scores)
scores.to_csv(csv, index=False)
except:
return [f"Soucis pour sauvegarder à {datetime.today()} dans {csv}"]
else:
return [f"Dernière sauvegarde {datetime.today()} dans {csv}"]
def highlight_value(df):
""" Cells style """
hight = []
for v, color in COLORS.items():
hight += [
{
"if": {"filter_query": "{{{}}} = {}".format(col, v), "column_id": col},
"backgroundColor": color,
"color": "white",
}
for col in df.columns
if col not in NO_ST_COLUMNS.values()
]
return hight
def scores_table_column_order(df):
df_student_columns = [c for c in df.columns if c not in NO_ST_COLUMNS.values()]
order = list(NO_ST_COLUMNS.values())+df_student_columns
return df.loc[:, order]
@app.callback(
[
dash.dependencies.Output("scores_table", "columns"),
dash.dependencies.Output("scores_table", "data"),
dash.dependencies.Output("scores_table", "style_data_conditional"),
],
[
dash.dependencies.Input("csv", "value"),
dash.dependencies.Input("btn_add_element", "n_clicks"),
dash.dependencies.State("scores_table", "data"),
],
)
def update_scores_table(csv, add_element, data):
ctx = dash.callback_context
if ctx.triggered[0]["prop_id"] == "csv.value":
stack = pd.read_csv(csv, encoding="UTF8")
elif ctx.triggered[0]["prop_id"] == "btn_add_element.n_clicks":
stack = pd.DataFrame.from_records(data)
infos = pd.DataFrame.from_records(
[{k: stack.iloc[-1][k] for k in NO_ST_COLUMNS.values()}]
)
stack = stack.append(infos)
stack = scores_table_column_order(stack)
return (
[
{"id": c, "name": c}
for c in stack.columns
if c not in ["Trimestre", "Nom", "Date"]
],
stack.to_dict("records"),
highlight_value(stack),
)

View File

@ -0,0 +1,29 @@
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
from .app import app
from .exam_analysis import app as exam_analysis
from .create_exam import app as create_exam
from .student_analysis import app as student_analysis
app.layout = html.Div(
[dcc.Location(id="url", refresh=False), html.Div(id="page-content")]
)
@app.callback(Output("page-content", "children"), Input("url", "pathname"))
def display_page(pathname):
if pathname == "/":
return exam_analysis.layout
elif pathname == "/create-exam":
return create_exam.layout
elif pathname == "/students":
return student_analysis.layout
else:
return "404"
if __name__ == "__main__":
app.run_server(debug=True)

View File

@ -0,0 +1,300 @@
#!/usr/bin/env python
# encoding: utf-8
import dash
import dash_html_components as html
import dash_core_components as dcc
import dash_table
import plotly.graph_objects as go
from datetime import date, datetime
import uuid
import pandas as pd
import yaml
from pathlib import Path
from ...scripts.getconfig import config
from ... import flat_df_students, pp_q_scores
from ...config import NO_ST_COLUMNS
from ..app import app
from ...scripts.exam import Exam
def get_students(csv):
return list(pd.read_csv(csv).T.to_dict().values())
COLORS = {
".": "black",
0: "#E7472B",
1: "#FF712B",
2: "#F2EC4C",
3: "#68D42F",
}
QUESTION_COLUMNS = [
{"id": "id", "name": "Question"},
{
"id": "competence",
"name": "Competence",
"presentation": "dropdown",
},
{"id": "theme", "name": "Domaine"},
{"id": "comment", "name": "Commentaire"},
{"id": "score_rate", "name": "Bareme"},
{"id": "is_leveled", "name": "Est_nivele"},
]
layout = html.Div(
[
html.Header(
children=[
html.H1("Bilan des élèves"),
],
),
html.Main(
children=[
html.Section(
children=[
html.Form(
id="select-student",
children=[
html.Label(
children=[
"Classe",
dcc.Dropdown(
id="tribe",
options=[
{"label": t["name"], "value": t["name"]}
for t in config["tribes"]
],
value=config["tribes"][0]["name"],
),
]
),
html.Label(
children=[
"Élève",
dcc.Dropdown(
id="student",
options=[
{"label": t["Nom"], "value": t["Nom"]}
for t in get_students(config["tribes"][0]["students"])
],
value=get_students(config["tribes"][0]["students"])[0]["Nom"],
),
]
),
html.Label(
children=[
"Trimestre",
dcc.Dropdown(
id="term",
options=[
{"label": i + 1, "value": i + 1}
for i in range(3)
],
value=1,
),
]
),
],
),
],
id="form",
),
html.Section(
children=[
html.H2("Évaluations"),
html.Div(
dash_table.DataTable(
id="exam_scores",
columns=[
{"id": "Nom", "name": "Évaluations"},
{"id": "Note", "name": "Note"},
{"id": "Bareme", "name": "Barème"},
],
data=[],
style_data_conditional=[
{
"if": {"row_index": "odd"},
"backgroundColor": "rgb(248, 248, 248)",
}
],
style_data={
"width": "100px",
"maxWidth": "100px",
"minWidth": "100px",
},
),
id="eval-table",
),
],
id="Évaluations",
),
html.Section(
children=[
html.Div(
id="competences-viz",
),
html.Div(
id="themes-vizz",
),
],
id="visualisation",
),
]
),
dcc.Store(id="student-scores"),
]
)
@app.callback(
[
dash.dependencies.Output("student", "options"),
dash.dependencies.Output("student", "value"),
],
[
dash.dependencies.Input("tribe", "value")
],)
def update_students_list(tribe):
tribe_config = [t for t in config["tribes"] if t["name"] == tribe][0]
students = get_students(tribe_config["students"])
options = [
{"label": t["Nom"], "value": t["Nom"]}
for t in students
]
value = students[0]["Nom"]
return options, value
@app.callback(
[
dash.dependencies.Output("student-scores", "data"),
],
[
dash.dependencies.Input("tribe", "value"),
dash.dependencies.Input("student", "value"),
dash.dependencies.Input("term", "value"),
],
)
def update_student_scores(tribe, student, term):
tribe_config = [t for t in config["tribes"] if t["name"] == tribe][0]
p = Path(tribe_config["name"])
csvs = list(p.glob("*.csv"))
dfs = []
for csv in csvs:
try:
scores = pd.read_csv(csv)
except pd.errors.ParserError:
pass
else:
if scores.iloc[0]["Commentaire"] == "commentaire" or scores.iloc[0].str.contains("PPRE").any():
scores.drop([0], inplace=True)
scores = flat_df_students(scores).dropna(subset=["Score"])
scores = scores[scores["Eleve"] == student]
scores = scores[scores["Trimestre"] == term]
dfs.append(scores)
df = pd.concat(dfs)
return [df.to_dict("records")]
@app.callback(
[
dash.dependencies.Output("exam_scores", "data"),
],
[
dash.dependencies.Input("student-scores", "data"),
],
)
def update_exam_scores(data):
scores = pd.DataFrame.from_records(data)
scores = pp_q_scores(scores)
assessment_scores = scores.groupby(["Nom"]).agg({"Note": "sum", "Bareme": "sum"})
return [assessment_scores.reset_index().to_dict("records")]
@app.callback(
[
dash.dependencies.Output("competences-viz", "children"),
],
[
dash.dependencies.Input("student-scores", "data"),
],
)
def update_competences_viz(data):
scores = pd.DataFrame.from_records(data)
scores = pp_q_scores(scores)
pt = pd.pivot_table(
scores,
index=["Competence"],
columns="Score",
aggfunc="size",
fill_value=0,
)
fig = go.Figure()
bars = [
{"score": -1, "name": "Pas de réponse", "color": COLORS["."]},
{"score": 0, "name": "Faux", "color": COLORS[0]},
{"score": 1, "name": "Peu juste", "color": COLORS[1]},
{"score": 2, "name": "Presque juste", "color": COLORS[2]},
{"score": 3, "name": "Juste", "color": COLORS[3]},
]
for b in bars:
try:
fig.add_bar(
x=list(config["competences"].keys()), y=pt[b["score"]], name=b["name"], marker_color=b["color"]
)
except KeyError:
pass
fig.update_layout(barmode="relative")
fig.update_layout(
height=500,
margin=dict(l=5, r=5, b=5, t=5),
)
return [dcc.Graph(figure=fig)]
@app.callback(
[
dash.dependencies.Output("themes-vizz", "children"),
],
[
dash.dependencies.Input("student-scores", "data"),
],
)
def update_themes_viz(data):
scores = pd.DataFrame.from_records(data)
scores = pp_q_scores(scores)
pt = pd.pivot_table(
scores,
index=["Domaine"],
columns="Score",
aggfunc="size",
fill_value=0,
)
fig = go.Figure()
bars = [
{"score": -1, "name": "Pas de réponse", "color": COLORS["."]},
{"score": 0, "name": "Faux", "color": COLORS[0]},
{"score": 1, "name": "Peu juste", "color": COLORS[1]},
{"score": 2, "name": "Presque juste", "color": COLORS[2]},
{"score": 3, "name": "Juste", "color": COLORS[3]},
]
for b in bars:
try:
fig.add_bar(
x=list(pt.index), y=pt[b["score"]], name=b["name"], marker_color=b["color"]
)
except KeyError:
pass
fig.update_layout(barmode="relative")
fig.update_layout(
height=500,
margin=dict(l=5, r=5, b=5, t=5),
)
return [dcc.Graph(figure=fig)]

View File

@ -4,9 +4,11 @@
import pandas as pd
import numpy as np
from math import ceil, floor
- from .config import COLUMNS, VALIDSCORE
+ from .config import COLUMNS
- # Values manipulations
+ """
+ Functions for manipulate score dataframes
+ """
def round_half_point(val):
@ -19,12 +21,13 @@ def round_half_point(val):
def score_to_mark(x):
- """ Compute the mark
+ """Compute the mark
if the item is leveled then the score is multiply by the score_rate
otherwise it copies the score
:param x: dictionnary with COLUMNS["is_leveled"], COLUMNS["score"] and COLUMNS["score_rate"] keys
+ :return: the mark
>>> d = {"Eleve":["E1"]*6 + ["E2"]*6,
... COLUMNS["score_rate"]:[1]*2+[2]*2+[2]*2 + [1]*2+[2]*2+[2]*2,
@ -43,8 +46,9 @@ def score_to_mark(x):
if x[COLUMNS["is_leveled"]]:
if x[COLUMNS["score"]] not in [0, 1, 2, 3]:
- raise ValueError(f"The evaluation is out of range: {x[COLUMNS['score']]} at {x}")
- #return round_half_point(x[COLUMNS["score"]] * x[COLUMNS["score_rate"]] / 3)
+ raise ValueError(
+ f"The evaluation is out of range: {x[COLUMNS['score']]} at {x}"
+ )
return round(x[COLUMNS["score"]] * x[COLUMNS["score_rate"]] / 3, 2)
if x[COLUMNS["score"]] > x[COLUMNS["score_rate"]]:
@ -55,9 +59,10 @@ def score_to_mark(x):
def score_to_level(x):
- """ Compute the level (".",0,1,2,3).
+ """Compute the level (".",0,1,2,3).
:param x: dictionnary with COLUMNS["is_leveled"], COLUMNS["score"] and COLUMNS["score_rate"] keys
+ :return: the level
>>> d = {"Eleve":["E1"]*6 + ["E2"]*6,
... COLUMNS["score_rate"]:[1]*2+[2]*2+[2]*2 + [1]*2+[2]*2+[2]*2,
@ -92,7 +97,9 @@ def score_to_level(x):
def compute_mark(df):
- """ Add Mark column to df
+ """Compute the mark for the dataframe
+ apply score_to_mark to each row
:param df: DataFrame with COLUMNS["score"], COLUMNS["is_leveled"] and COLUMNS["score_rate"] columns.
@ -123,9 +130,12 @@ def compute_mark(df):
def compute_level(df):
- """ Add Mark column to df
+ """Compute level for the dataframe
+ Applies score_to_level to each row
:param df: DataFrame with COLUMNS["score"], COLUMNS["is_leveled"] and COLUMNS["score_rate"] columns.
+ :return: Columns with level
>>> d = {"Eleve":["E1"]*6 + ["E2"]*6,
... COLUMNS["score_rate"]:[1]*2+[2]*2+[2]*2 + [1]*2+[2]*2+[2]*2,
@ -154,9 +164,10 @@ def compute_level(df):
def compute_normalized(df):
- """ Compute the normalized mark (Mark / score_rate)
+ """Compute the normalized mark (Mark / score_rate)
:param df: DataFrame with "Mark" and COLUMNS["score_rate"] columns
+ :return: column with normalized mark
>>> d = {"Eleve":["E1"]*6 + ["E2"]*6,
... COLUMNS["score_rate"]:[1]*2+[2]*2+[2]*2 + [1]*2+[2]*2+[2]*2,
@ -187,7 +198,9 @@ def compute_normalized(df):
def pp_q_scores(df):
- """ Postprocessing questions scores dataframe
+ """Postprocessing questions scores dataframe
+ Add 3 columns: mark, level and normalized
:param df: questions-scores dataframe
:return: same data frame with mark, level and normalize columns

211
recopytex/scripts/exam.py Normal file

@ -0,0 +1,211 @@
#!/usr/bin/env python
# encoding: utf-8
from datetime import datetime
from pathlib import Path
# from prompt_toolkit import HTML
from ..config import NO_ST_COLUMNS
import pandas as pd
import yaml
from .getconfig import config
def try_parsing_date(text, formats=["%Y-%m-%d", "%Y.%m.%d", "%Y/%m/%d"]):
for fmt in formats:
try:
return datetime.strptime(text[:10], fmt)
except ValueError:
pass
raise ValueError("no valid date format found")
def format_question(question):
question["score_rate"] = float(question["score_rate"])
return question
class Exam:
def __init__(self, name, tribename, date, term, **kwrds):
self._name = name
self._tribename = tribename
self._date = try_parsing_date(date)
self._term = term
try:
kwrds["exercices"]
except KeyError:
self._exercises = {}
else:
self._exercises = kwrds["exercices"]
@property
def name(self):
return self._name
@property
def tribename(self):
return self._tribename
@property
def date(self):
return self._date
@property
def term(self):
return self._term
def add_exercise(self, name, questions):
"""Add key with questions in ._exercises"""
try:
self._exercises[name]
except KeyError:
self._exercises[name] = [
format_question(question) for question in questions
]
else:
raise KeyError("The exercise already exsists. Use modify_exercise")
def modify_exercise(self, name, questions, append=False):
"""Modify questions of an exercise
If append==True, add questions to the exercise questions
"""
try:
self._exercises[name]
except KeyError:
raise KeyError("The exercise already exsists. Use modify_exercise")
else:
if append:
self._exercises[name] += format_question(questions)
else:
self._exercises[name] = format_question(questions)
@property
def exercices(self):
return self._exercises
@property
def tribe_path(self):
return Path(config["source"]) / self.tribename
@property
def tribe_student_path(self):
return (
Path(config["source"])
/ [t["students"] for t in config["tribes"] if t["name"] == self.tribename][
0
]
)
@property
def long_name(self):
"""Get exam name with date inside"""
return f"{self.date.strftime('%y%m%d')}_{self.name}"
def path(self, extention=""):
return self.tribe_path / (self.long_name + extention)
def to_dict(self):
return {
"name": self.name,
"tribename": self.tribename,
"date": self.date,
"term": self.term,
"exercices": self.exercices,
}
def to_row(self):
rows = []
for ex, questions in self.exercices.items():
for q in questions:
rows.append(
{
"term": self.term,
"assessment": self.name,
"date": self.date.strftime("%d/%m/%Y"),
"exercise": ex,
"question": q["id"],
**q,
}
)
return rows
@property
def themes(self):
themes = set()
for questions in self._exercises.values():
themes.update([q["theme"] for q in questions])
return themes
def display_exercise(self, name):
pass
def display(self, name):
pass
def write_yaml(self):
print(f"Sauvegarde temporaire dans {self.path('.yml')}")
self.tribe_path.mkdir(exist_ok=True)
with open(self.path(".yml"), "w") as f:
f.write(yaml.dump(self.to_dict()))
def write_csv(self):
rows = self.to_row()
print(rows)
base_df = pd.DataFrame.from_dict(rows)[NO_ST_COLUMNS.keys()]
base_df.rename(columns=NO_ST_COLUMNS, inplace=True)
students = pd.read_csv(self.tribe_student_path)["Nom"]
for student in students:
base_df[student] = ""
self.tribe_path.mkdir(exist_ok=True)
base_df.to_csv(self.path(".csv"), index=False)
@property
def score_rate(self):
total = 0
for ex, questions in self._exercises.items():
total += sum([q["score_rate"] for q in questions])
return total
@property
def competences_rate(self):
"""Dictionnary with competences as key and total rate as value"""
rates = {}
for ex, questions in self._exercises.items():
for q in questions:
try:
q["competence"]
except KeyError:
pass
else:
try:
rates[q["competence"]] += q["score_rate"]
except KeyError:
rates[q["competence"]] = q["score_rate"]
return rates
@property
def themes_rate(self):
"""Dictionnary with themes as key and total rate as value"""
rates = {}
for ex, questions in self._exercises.items():
for q in questions:
try:
q["theme"]
except KeyError:
pass
else:
if q["theme"]:
try:
rates[q["theme"]] += q["score_rate"]
except KeyError:
rates[q["theme"]] = q["score_rate"]
return rates
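
The create_exam page above drives this class from its callbacks (store_exam, save_to_csv). As a rough usage sketch outside Dash — assuming the example recoconfig.yml and tribe1.csv shown earlier in this diff sit in the working directory, since write_csv looks the tribe and its student list up in the config, and assuming the import path matches the file added as recopytex/scripts/exam.py:

```python
# Usage sketch for the Exam class above; the question dicts mirror the
# QUESTION_COLUMNS used by the create_exam page. The values are made up.
from recopytex.scripts.exam import Exam  # assumed import path

exam = Exam(name="DS1", tribename="Tribe1", date="2021-01-22", term=1)
exam.add_exercise(
    "Exercice 1",
    [
        {"id": "1a", "competence": "Chercher", "theme": "Calcul",
         "comment": "Somme de fractions", "score_rate": 2, "is_leveled": 1},
        {"id": "1b", "competence": "Calculer", "theme": "Calcul",
         "comment": "Produit en croix", "score_rate": 1, "is_leveled": 1},
    ],
)

print(exam.score_rate)        # 3.0
print(exam.competences_rate)  # {'Chercher': 2.0, 'Calculer': 1.0}
print(exam.themes_rate)       # {'Calcul': 3.0}
exam.write_csv()              # writes Tribe1/210122_DS1.csv, one empty column per student
```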

View File

@ -0,0 +1,9 @@
#!/usr/bin/env python
# encoding: utf-8
import yaml
CONFIGPATH = "recoconfig.yml"
with open(CONFIGPATH, "r") as config:
config = yaml.load(config, Loader=yaml.FullLoader)

View File

@ -0,0 +1,233 @@
#!/usr/bin/env python
# encoding: utf-8
from prompt_toolkit import prompt, HTML, ANSI
from prompt_toolkit import print_formatted_text as print
from prompt_toolkit.styles import Style
from prompt_toolkit.validation import Validator
from prompt_toolkit.completion import WordCompleter
from unidecode import unidecode
from datetime import datetime
from functools import wraps
import sys
from .getconfig import config
VALIDATE = [
"o",
"ok",
"OK",
"oui",
"OUI",
"yes",
"YES",
]
REFUSE = ["n", "non", "NON", "no", "NO"]
CANCEL = ["a", "annuler"]
STYLE = Style.from_dict(
{
"": "#93A1A1",
"validation": "#884444",
"appending": "#448844",
}
)
class CancelError(Exception):
pass
def prompt_validate(question, cancelable=False, empty_means=1, style="validation"):
"""Prompt for validation
:param question: Text to print to ask the question.
:param cancelable: enable cancel answer
:param empty_means: result for no answer
:return:
0 -> Refuse
1 -> Validate
-1 -> cancel
"""
question_ = question
choices = VALIDATE + REFUSE
if cancelable:
question_ += "(a ou annuler pour sortir)"
choices += CANCEL
ans = prompt(
[
(f"class:{style}", question_),
],
completer=WordCompleter(choices),
style=STYLE,
).lower()
if ans == "":
return empty_means
if ans in VALIDATE:
return 1
if cancelable and ans in CANCEL:
return -1
return 0
def prompt_until_validate(question="C'est ok? ", cancelable=False):
def decorator(func):
@wraps(func)
def wrapper(*args, **kwrd):
ans = func(*args, **kwrd)
confirm = prompt_validate(question, cancelable)
if confirm == -1:
raise CancelError
while not confirm:
sys.stdout.flush()
ans = func(*args, **ans, **kwrd)
confirm = prompt_validate(question, cancelable)
if confirm == -1:
raise CancelError
return ans
return wrapper
return decorator
@prompt_until_validate()
def prompt_exam(**kwrd):
""" Prompt questions to edit an exam """
print(HTML("<b>Nouvelle évaluation</b>"))
exam = {}
exam["name"] = prompt("Nom de l'évaluation: ", default=kwrd.get("name", "DS"))
tribes_name = [t["name"] for t in config["tribes"]]
exam["tribename"] = prompt(
"Nom de la classe: ",
default=kwrd.get("tribename", ""),
completer=WordCompleter(tribes_name),
validator=Validator.from_callable(lambda x: x in tribes_name),
)
exam["tribe"] = [t for t in config["tribes"] if t["name"] == exam["tribename"]][0]
exam["date"] = prompt(
"Date de l'évaluation (%y%m%d): ",
default=kwrd.get("date", datetime.today()).strftime("%y%m%d"),
validator=Validator.from_callable(lambda x: (len(x) == 6) and x.isdigit()),
)
exam["date"] = datetime.strptime(exam["date"], "%y%m%d")
exam["term"] = prompt(
"Trimestre: ",
validator=Validator.from_callable(lambda x: x.isdigit()),
default=kwrd.get("term", "1"),
)
return exam
@prompt_until_validate()
def prompt_exercise(number=1, completer={}, **kwrd):
exercise = {}
try:
kwrd["name"]
except KeyError:
print(HTML("<b>Nouvel exercice</b>"))
exercise["name"] = prompt(
"Nom de l'exercice: ", default=kwrd.get("name", f"Exercice {number}")
)
else:
print(HTML(f"<b>Modification de l'exercice: {kwrd['name']}</b>"))
exercise["name"] = kwrd["name"]
exercise["questions"] = []
try:
kwrd["questions"][0]
except KeyError:
last_question_id = "1a"
except IndexError:
last_question_id = "1a"
else:
for ques in kwrd["questions"]:
try:
exercise["questions"].append(
prompt_question(completer=completer, **ques)
)
except CancelError:
print("Cette question a été supprimée")
last_question_id = exercise["questions"][-1]["id"]
appending = prompt_validate(
question="Ajouter un élément de notation? ", style="appending"
)
while appending:
try:
exercise["questions"].append(
prompt_question(last_question_id, completer=completer)
)
except CancelError:
print("Cette question a été supprimée")
else:
last_question_id = exercise["questions"][-1]["id"]
appending = prompt_validate(
question="Ajouter un élément de notation? ", style="appending"
)
return exercise
@prompt_until_validate(cancelable=True)
def prompt_question(last_question_id="1a", completer={}, **kwrd):
try:
kwrd["id"]
except KeyError:
print(HTML("<b>Nouvel élément de notation</b>"))
else:
print(
HTML(f"<b>Modification de l'élément {kwrd['id']} ({kwrd['comment']})</b>")
)
question = {}
question["id"] = prompt(
"Identifiant de la question: ",
default=kwrd.get("id", "1a"),
)
question["competence"] = prompt(
"Competence: ",
default=kwrd.get("competence", list(config["competences"].keys())[0]),
completer=WordCompleter(config["competences"].keys()),
validator=Validator.from_callable(lambda x: x in config["competences"].keys()),
)
question["theme"] = prompt(
"Domaine: ",
default=kwrd.get("theme", ""),
completer=WordCompleter(completer.get("theme", [])),
)
question["comment"] = prompt(
"Commentaire: ",
default=kwrd.get("comment", ""),
)
question["is_leveled"] = prompt(
"Évaluation par niveau: ",
default=kwrd.get("is_leveled", "1"),
# validate
)
question["score_rate"] = prompt(
"Barème: ",
default=kwrd.get("score_rate", "1"),
# validate
)
return question

View File

@ -3,13 +3,16 @@
import click
from pathlib import Path
- import yaml
import sys
import papermill as pm
+ import pandas as pd
from datetime import datetime
+ import yaml
- from .prepare_csv import prepare_csv
+ from .getconfig import config, CONFIGPATH
- from .config import config
+ from ..config import NO_ST_COLUMNS
+ from .exam import Exam
+ from ..dashboard.index import app as dash
@click.group()
@ -24,8 +27,33 @@ def print_config():
click.echo(config)
- def reporting(csv_file):
- # csv_file = Path(csv_file)
+ @cli.command()
+ def setup():
"""Setup the environnement using recoconfig.yml"""
for tribe in config["tribes"]:
Path(tribe["name"]).mkdir(exist_ok=True)
if not Path(tribe["students"]).exists():
print(f"The file {tribe['students']} does not exists")
@cli.command()
@click.option("--debug", default=0, help="Debug mode for dash")
def dashboard(debug):
dash.run_server(debug=bool(debug))
@cli.command()
@click.argument("csv_file")
def report(csv_file):
csv = Path(csv_file)
if not csv.exists():
click.echo(f"{csv_file} does not exists")
sys.exit(1)
if csv.suffix != ".csv":
click.echo(f"{csv_file} has to be a csv file")
sys.exit(1)
csv_file = Path(csv_file)
tribe_dir = csv_file.parent
csv_filename = csv_file.name.split(".")[0]
@ -54,49 +82,3 @@ def reporting(csv_file):
csv_file=str(csv_file.absolute()),
),
)
@cli.command()
@click.argument("target", required=False)
def report(target=""):
""" Make a report for the eval
:param target: csv file or a directory where csvs are
"""
try:
if target.endswith(".csv"):
csv = Path(target)
if not csv.exists():
click.echo(f"{target} does not exists")
sys.exit(1)
if csv.suffix != ".csv":
click.echo(f"{target} has to be a csv file")
sys.exit(1)
csvs = [csv]
else:
csvs = list(Path(target).glob("**/*.csv"))
except AttributeError:
csvs = list(Path(config["source"]).glob("**/*.csv"))
for csv in csvs:
click.echo(f"Processing {csv}")
try:
reporting(csv)
except pm.exceptions.PapermillExecutionError as e:
click.echo(f"Error with {csv}: {e}")
@cli.command()
def prepare():
""" Prepare csv file """
items = prepare_csv()
click.echo(items)
@cli.command()
@click.argument("tribe")
def random_pick(tribe):
""" Randomly pick a student """
pass

View File

@ -1,3 +1,4 @@
prompt_toolkit
ansiwrap==0.8.4
appdirs==1.4.3
attrs==19.1.0

69
requirements_dev.txt Normal file

@ -0,0 +1,69 @@
ansiwrap
attrs
backcall
bleach
certifi
chardet
Click
colorama
cycler
decorator
defusedxml
entrypoints
future
idna
importlib-resources
ipykernel
ipython
ipython-genutils
ipywidgets
jedi
Jinja2
jsonschema
jupyter
jupyter-client
jupyter-console
jupyter-core
jupytex
kiwisolver
MarkupSafe
matplotlib
mistune
nbconvert
nbformat
notebook
numpy
pandas
pandocfilters
papermill
parso
pexpect
pickleshare
prometheus-client
prompt-toolkit
ptyprocess
Pygments
pyparsing
pyrsistent
python-dateutil
pytz
PyYAML
pyzmq
qtconsole
-e git+git_opytex:/lafrite/recopytex.git@e9a8310f151ead60434ae944d726a2fd22b23d06#egg=Recopytex
requests
scipy
seaborn
Send2Trash
six
tenacity
terminado
testpath
textwrap3
tornado
tqdm
traitlets
urllib3
wcwidth
webencodings
widgetsnbextension

View File

@ -17,7 +17,6 @@ setup(
'numpy',
'papermill',
'pyyaml',
- 'PyInquirer',
],
entry_points='''
[console_scripts]