Compare commits

29 Commits

SHA1 Message Date
0a5a931d01 Feat: add row to scores_table!! 2021-01-14 21:53:38 +01:00
21397272c9 Feat: move dashboard to its own directory 2021-01-14 20:09:25 +01:00
894ebc4ec8 Feat: add competence bar plot 2021-01-13 08:28:54 +01:00
f6bfac4144 Feat: Hist graph and describe 2021-01-12 22:32:26 +01:00
cfd5928853 Feat: autosave while editing scores 2021-01-12 17:25:58 +01:00
8fcad94df4 Feat: start analysis dashboard 2021-01-10 20:46:14 +01:00
27d7c45980 Feat: add temporary save 2021-01-10 07:21:28 +01:00
159e7a9f2e Feat: move exam to Exam class 2021-01-10 06:53:16 +01:00
72afb26e2a Fix: indentation 2021-01-10 06:52:56 +01:00
6eb918e0f5 Feat: can read exam config from yaml 2021-01-06 09:09:35 +01:00
56a669b2be Feat: remove exQty in prompt 2021-01-06 08:53:06 +01:00
a5f22fc8cd Fix: commentaire -> comment 2021-01-06 07:59:42 +01:00
5177df06d7 Fix: element -> row 2021-01-05 09:15:41 +01:00
d78fcbc281 Feat: add competences 2021-01-05 09:15:24 +01:00
98fa768541 format: black formatting 2021-01-05 09:14:52 +01:00
00c2681823 Fix: element -> row 2021-01-05 09:14:37 +01:00
52f2f3f4cf Feat: incorporate competences config 2021-01-01 18:04:28 +01:00
4ea7f8db14 Feat: replace references to PyInquirer with prompt_toolkit 2021-01-01 17:47:13 +01:00
04a2506d86 Feat: rewrite new_exam prompt without PyInquirer 2020-12-31 18:00:42 +01:00
77c358b0c1 Feat: write the csv file 2020-10-04 18:49:44 +02:00
1886deb430 Feat: question prompts 2020-10-04 18:10:43 +02:00
5e0f2d92ef Feat: prompt for exercises 2020-10-04 16:38:36 +02:00
49cc52f7d1 Feat: prompts and write prompt_exam 2020-10-04 16:11:55 +02:00
6d93ef62d7 Feat: split requirements 2020-10-04 16:11:41 +02:00
488df4cb0c Feat: start example folder 2020-10-04 15:07:11 +02:00
9136f359e0 Feat: add .vim in gitignore 2020-10-04 07:30:21 +02:00
1dfee17990 Doc: some explanations 2020-10-04 07:29:37 +02:00
400fb0a690 Feat: add comments 2020-10-04 07:20:08 +02:00
04a1ed9378 Feat: remove versions in requirements 2020-10-04 07:09:18 +02:00
19 changed files with 1005 additions and 320 deletions

4
.gitignore vendored

@@ -122,3 +122,7 @@ dmypy.json
 # Pyre type checker
 .pyre/
+
+# vim
+.vim


@@ -6,3 +6,29 @@ This time, we use:
 - yaml files for the information about the students
 - Notebooks for the analysis
 - Papermill to produce the notebooks from templates
+
+## The CSV files
+
+The parameters are described in ./recopytex/config.py
+
+### Description of the questions
+
+- Trimestre
+- Nom
+- Date
+- Exercice
+- Question
+- Competence
+- Domaine
+- Commentaire
+- Bareme
+- Est_nivele
+
+### Values used to grade the students
+
+- Score: 0, 1, 2, 3
+- No answer: .
+- Absent: a
+- Exempted: (empty)
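To make the grading format concrete, here is a hedged sample of such a csv file (the column order follows NO_ST_COLUMNS further down in this diff; the students Alice and Bob and every value are made up):

Trimestre,Nom,Date,Exercice,Question,Competence,Domaine,Commentaire,Bareme,Est_nivele,Alice,Bob
1,DS1,14/01/2021,1,1a,Cal,Fractions,Simplifier,2,1,3,.
1,DS1,14/01/2021,1,1b,Rai,Fractions,Justifier,2,1,2,1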

32
example/recoconfig.yml Normal file

@@ -0,0 +1,32 @@
---
source: ./
output: ./
templates: templates/

competences:
  Chercher:
    name: Chercher
    abrv: Cher
  Représenter:
    name: Représenter
    abrv: Rep
  Modéliser:
    name: Modéliser
    abrv: Mod
  Raisonner:
    name: Raisonner
    abrv: Rai
  Calculer:
    name: Calculer
    abrv: Cal
  Communiquer:
    name: Communiquer
    abrv: Com

tribes:
  - name: Tribe1
    type: Type1
    students: tribe1.csv
  - name: Tribe2
    students: tribe2.csv
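A minimal sketch of how this file is consumed; it mirrors recopytex/scripts/getconfig.py later in this diff, and the loop over tribes is only illustrative:

import yaml

with open("recoconfig.yml", "r") as configfile:
    config = yaml.load(configfile, Loader=yaml.FullLoader)

for tribe in config["tribes"]:
    # each tribe gets a directory holding its exam csv files
    print(tribe["name"], tribe["students"])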

21
example/tribe1.csv Normal file

@@ -0,0 +1,21 @@
Nom,email
Star Tice,stice0@jalbum.net
Umberto Dingate,udingate1@tumblr.com
Starlin Crangle,scrangle2@wufoo.com
Humbert Bourcq,hbourcq3@g.co
Gabriella Handyside,ghandyside4@patch.com
Stewart Eaves,seaves5@ycombinator.com
Erick Going,egoing6@va.gov
Ase Praton,apraton7@va.gov
Rollins Planks,rplanks8@delicious.com
Dunstan Sarjant,dsarjant9@naver.com
Stacy Guiton,sguitona@themeforest.net
Ange Stanes,astanesb@marriott.com
Amabelle Elleton,aelletonc@squidoo.com
Darn Broomhall,dbroomhalld@cisco.com
Dyan Chatto,dchattoe@npr.org
Keane Rennebach,krennebachf@dot.gov
Nari Paulton,npaultong@gov.uk
Brandy Wase,bwaseh@ftc.gov
Jaclyn Firidolfi,jfiridolfii@reuters.com
Violette Lockney,vlockneyj@chron.com

21
example/tribe2.csv Normal file

@@ -0,0 +1,21 @@
Nom,email
Elle McKintosh,emckintosh0@1und1.de
Ty Megany,tmegany1@reuters.com
Pippa Borrows,pborrows2@a8.net
Sonny Eskrick,seskrick3@123-reg.co.uk
Mollee Britch,mbritch4@usda.gov
Ingram Plaistowe,iplaistowe5@purevolume.com
Fay Vanyard,fvanyard6@sbwire.com
Nancy Rase,nrase7@omniture.com
Rachael Ruxton,rruxton8@bravesites.com
Tallie Rushmer,trushmer9@home.pl
Seward MacIlhagga,smacilhaggaa@hatena.ne.jp
Lizette Searl,lsearlb@list-manage.com
Talya Mannagh,tmannaghc@webnode.com
Jordan Witherbed,jwitherbedd@unesco.org
Reagan Botcherby,rbotcherbye@scientificamerican.com
Libbie Shoulder,lshoulderf@desdev.cn
Abner Khomich,akhomichg@youtube.com
Zollie Kitman,zkitmanh@forbes.com
Fiorenze Durden,fdurdeni@feedburner.com
Kevyn Race,kracej@seattletimes.com


@@ -2,16 +2,16 @@
 # encoding: utf-8
 
 NO_ST_COLUMNS = {
-    "assessment": "Nom",
     "term": "Trimestre",
+    "assessment": "Nom",
     "date": "Date",
     "exercise": "Exercice",
     "question": "Question",
     "competence": "Competence",
     "theme": "Domaine",
     "comment": "Commentaire",
-    "is_leveled": "Est_nivele",
     "score_rate": "Bareme",
+    "is_leveled": "Est_nivele",
 }
 
 COLUMNS = {
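NO_ST_COLUMNS translates between the internal English keys and the French CSV headers. A hedged sketch of the round trip (the row content is hypothetical; new_exam performs the same two steps at the end of this diff):

import pandas as pd

from recopytex.config import NO_ST_COLUMNS

rows = [
    {"term": "1", "assessment": "DS1", "date": "14/01/2021", "exercise": "1",
     "question": "1a", "competence": "Calculer", "theme": "Fractions",
     "comment": "Simplifier", "score_rate": 2, "is_leveled": 1}
]
# order the columns with the internal keys, then rename them to the csv headers
base_df = pd.DataFrame.from_records(rows)[NO_ST_COLUMNS.keys()]
base_df = base_df.rename(columns=NO_ST_COLUMNS)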


@@ -17,7 +17,7 @@ def try_replace(x, old, new):
 def extract_students(df, no_student_columns=NO_ST_COLUMNS.values()):
-    """ Extract the list of students from df
+    """Extract the list of students from df
 
     :param df: the dataframe
     :param no_student_columns: columns that are not students
@@ -30,7 +30,7 @@ def extract_students(df, no_student_columns=NO_ST_COLUMNS.values()):
 def flat_df_students(
     df, no_student_columns=NO_ST_COLUMNS.values(), postprocessing=True
 ):
-    """ Flat the dataframe by returning a dataframe with on student on each line
+    """Flatten the dataframe, returning a dataframe with one student on each line
 
     :param df: the dataframe (one row per question)
     :param no_student_columns: columns that are not students
@@ -63,7 +63,7 @@ def flat_df_students(
 def flat_df_for(
     df, student, no_student_columns=NO_ST_COLUMNS.values(), postprocessing=True
 ):
-    """ Extract the data only for one student
+    """Extract the data only for one student
 
     :param df: the dataframe (one row per question)
     :param no_student_columns: columns that are not students
@@ -88,7 +88,7 @@ def flat_df_for(
 def postprocess(df):
-    """ Postprocessing score dataframe
+    """Postprocess the score dataframe
 
     - Replace na with an empty string
     - Replace "NOANSWER" with -1
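These docstrings describe a wide-to-long reshape: a score file has one row per question and one column per student, and flat_df_students returns one row per (student, question) pair. A hedged sketch of the idea (the real implementation is not shown in this hunk; the Eleve and Score column names match their use in the dashboard below):

import pandas as pd

wide = pd.DataFrame(
    {
        "Exercice": ["1", "1"],
        "Question": ["1a", "1b"],
        "Alice": [3, 2],
        "Bob": [".", 1],
    }
)
long = wide.melt(
    id_vars=["Exercice", "Question"], var_name="Eleve", value_name="Score"
)  # one row per (student, question) pair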


341
recopytex/dashboard/exam.py Normal file

@@ -0,0 +1,341 @@
#!/usr/bin/env python
# encoding: utf-8
import dash
import dash_html_components as html
import dash_core_components as dcc
import dash_table
from dash.exceptions import PreventUpdate
import plotly.graph_objects as go
from pathlib import Path
from datetime import datetime
import pandas as pd
import numpy as np
import dash_bootstrap_components as dbc
from .. import flat_df_students, pp_q_scores
from ..config import NO_ST_COLUMNS
from ..scripts.getconfig import config, CONFIGPATH
COLORS = {
".": "black",
0: "#E7472B",
1: "#FF712B",
2: "#F2EC4C",
3: "#68D42F",
}
app = dash.Dash(external_stylesheets=[dbc.themes.SIMPLEX])
# external_stylesheets = ["https://codepen.io/chriddyp/pen/bWLwgP.css"]
# app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
# app = dash.Dash(__name__)
app.layout = html.Div(
children=[
dbc.NavbarSimple(
children=[
dbc.Alert("Dernière sauvegarde", id="lastsave", color="success"),
],
brand="Analyse des notes",
brand_href="#",
color="success",
dark=True,
),
html.Br(),
dbc.Row(
[
dbc.Col(
[
"Classe: ",
dbc.Select(
id="tribe",
options=[
{"label": t["name"], "value": t["name"]}
for t in config["tribes"]
],
value=config["tribes"][0]["name"],
),
]
),
dbc.Col(
[
"Evaluation: ",
dbc.Select(id="csv"),
]
),
],
),
html.Br(),
dbc.Row(
[
dbc.Col(
dash_table.DataTable(
id="final_score_table",
columns=[
{"id": "Élève", "name": "Élève"},
{"id": "Note", "name": "Note"},
{"id": "Barème", "name": "Bareme"},
],
data=[],
style_data_conditional=[
{
"if": {"row_index": "odd"},
"backgroundColor": "rgb(248, 248, 248)",
}
],
style_header={
"backgroundColor": "rgb(230, 230, 230)",
"fontWeight": "bold",
},
style_data={
"width": "100px",
"maxWidth": "100px",
"minWidth": "100px",
},
)
),
dbc.Col(
[
dash_table.DataTable(
id="final_score_describe",
),
dcc.Graph(
id="fig_assessment_hist",
),
# dcc.Graph(id="fig_competences"),
]
),
],
),
html.Br(),
html.Div(
[
dash_table.DataTable(
id="scores_table",
columns=[{"id": c, "name": c} for c in NO_ST_COLUMNS.values()],
style_cell={
"whiteSpace": "normal",
"height": "auto",
},
style_data_conditional=[],
editable=True,
),
dbc.Button("Ajouter un élément", id="btn_add_element"),
]
),
dcc.Store(id="final_score"),
]
)
@app.callback(
[
dash.dependencies.Output("csv", "options"),
dash.dependencies.Output("csv", "value"),
],
[dash.dependencies.Input("tribe", "value")],
)
def update_csvs(value):
    if not value:
        raise PreventUpdate
    p = Path(value)
    csvs = list(p.glob("*.csv"))
    try:
        return [{"label": str(c), "value": str(c)} for c in csvs], str(csvs[0])
    except IndexError:
        # no csv found for this tribe: empty both outputs instead of
        # returning a single empty list for two declared outputs
        return [], None
@app.callback(
[
dash.dependencies.Output("final_score", "data"),
],
[dash.dependencies.Input("scores_table", "data")],
)
def update_final_scores(data):
if not data:
raise PreventUpdate
try:
scores = pd.DataFrame.from_records(data)
scores = flat_df_students(scores).dropna(subset=["Score"])
scores = pp_q_scores(scores)
assessment_scores = scores.groupby(["Eleve"]).agg(
{"Note": "sum", "Bareme": "sum"}
)
return [assessment_scores.reset_index().to_dict("records")]
except KeyError:
raise PreventUpdate
@app.callback(
[
dash.dependencies.Output("final_score_table", "columns"),
dash.dependencies.Output("final_score_table", "data"),
],
[dash.dependencies.Input("final_score", "data")],
)
def update_final_scores_table(data):
assessment_scores = pd.DataFrame.from_records(data)
return [
{"id": c, "name": c} for c in assessment_scores.columns
], assessment_scores.to_dict("records")
@app.callback(
[
dash.dependencies.Output("final_score_describe", "columns"),
dash.dependencies.Output("final_score_describe", "data"),
],
[dash.dependencies.Input("final_score", "data")],
)
def update_final_scores_descr(data):
desc = pd.DataFrame.from_records(data)["Note"].describe()
return [{"id": c, "name": c} for c in desc.keys()], [desc.to_dict()]
@app.callback(
[
dash.dependencies.Output("fig_assessment_hist", "figure"),
],
[dash.dependencies.Input("final_score", "data")],
)
def update_final_scores_hist(data):
assessment_scores = pd.DataFrame.from_records(data)
ranges = np.linspace(
0, assessment_scores.Bareme.max(), int(assessment_scores.Bareme.max() * 2 + 1)
)
bins = pd.cut(assessment_scores["Note"], ranges)
assessment_scores["Bin"] = bins
assessment_grouped = (
assessment_scores.reset_index()
.groupby("Bin")
.agg({"Bareme": "count", "Eleve": lambda x: "\n".join(x)})
)
assessment_grouped.index = assessment_grouped.index.map(lambda i: i.right)
fig = go.Figure()
fig.add_bar(
x=assessment_grouped.index,
y=assessment_grouped.Bareme,
text=assessment_grouped.Eleve,
textposition="auto",
hovertemplate="",
marker_color="#4E89DE",
)
fig.update_layout(
height=300,
margin=dict(l=5, r=5, b=5, t=5),
)
return [fig]
# @app.callback(
# [
# dash.dependencies.Output("fig_competences", "figure"),
# ],
# [dash.dependencies.Input("scores_table", "data")],
# )
# def update_competence_fig(data):
# scores = pd.DataFrame.from_records(data)
# scores = flat_df_students(scores).dropna(subset=["Score"])
# scores = pp_q_scores(scores)
# pt = pd.pivot_table(
# scores,
# index=["Exercice", "Question", "Commentaire"],
# columns="Score",
# aggfunc="size",
# fill_value=0,
# )
# for i in {i for i in pt.index.get_level_values(0)}:
# pt.loc[(str(i), "", ""), :] = ""
# pt.sort_index(inplace=True)
# index = (
# pt.index.get_level_values(0)
# + ":"
# + pt.index.get_level_values(1)
# + " "
# + pt.index.get_level_values(2)
# )
#
# fig = go.Figure()
# bars = [
# {"score": -1, "name": "Pas de réponse", "color": COLORS["."]},
# {"score": 0, "name": "Faut", "color": COLORS[0]},
# {"score": 1, "name": "Peu juste", "color": COLORS[1]},
# {"score": 2, "name": "Presque juste", "color": COLORS[2]},
# {"score": 3, "name": "Juste", "color": COLORS[3]},
# ]
# for b in bars:
# try:
# fig.add_bar(
# x=index, y=pt[b["score"]], name=b["name"], marker_color=b["color"]
# )
# except KeyError:
# pass
# fig.update_layout(barmode="relative")
# return [fig]
@app.callback(
[
dash.dependencies.Output("lastsave", "children"),
dash.dependencies.Output("lastsave", "color"),
],
[
dash.dependencies.Input("scores_table", "data"),
dash.dependencies.State("csv", "value"),
],
)
def save_scores(data, csv):
    try:
        scores = pd.DataFrame.from_records(data)
        scores.to_csv(csv, index=False)
    except Exception:
        return [f"Soucis pour sauvegarder à {datetime.today()} dans {csv}"], "warning"
    else:
        return [f"Dernière sauvegarde {datetime.today()} dans {csv}"], "success"
def highlight_value(df):
""" Cells style """
hight = []
for v, color in COLORS.items():
hight += [
{
"if": {"filter_query": "{{{}}} = {}".format(col, v), "column_id": col},
"backgroundColor": color,
"color": "white",
}
for col in df.columns
if col not in NO_ST_COLUMNS.values()
]
return hight
@app.callback(
[
dash.dependencies.Output("scores_table", "columns"),
dash.dependencies.Output("scores_table", "data"),
dash.dependencies.Output("scores_table", "style_data_conditional"),
],
[
dash.dependencies.Input("csv", "value"),
dash.dependencies.Input("btn_add_element", "n_clicks"),
dash.dependencies.State("scores_table", "data"),
],
)
def update_scores_table(csv, add_element, data):
ctx = dash.callback_context
if ctx.triggered[0]['prop_id'] == "csv.value":
stack = pd.read_csv(csv, encoding="UTF8")
elif ctx.triggered[0]['prop_id'] == "btn_add_element.n_clicks":
stack = pd.DataFrame.from_records(data)
infos = pd.DataFrame.from_records([{k: stack.iloc[-1][k] for k in NO_ST_COLUMNS.values()}])
stack = stack.append(infos)
return (
[{"id": c, "name": c} for c in stack.columns],
stack.to_dict("records"),
highlight_value(stack),
)
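One step of update_final_scores_hist deserves a worked check: np.linspace(0, top, 2 * top + 1) produces half-point-wide bins, and each bin is then labelled by its right edge. A standalone sketch with made-up marks:

import numpy as np
import pandas as pd

top = 20
ranges = np.linspace(0, top, int(top * 2 + 1))  # 0.0, 0.5, 1.0, ..., 20.0
notes = pd.Series([8.5, 12.0, 12.25, 15.0])
bins = pd.cut(notes, ranges)
print(bins.map(lambda i: i.right).tolist())  # [8.5, 12.0, 12.5, 15.0]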


@@ -6,7 +6,9 @@ import numpy as np
 from math import ceil, floor
 from .config import COLUMNS, VALIDSCORE
 
-# Values manipulations
+"""
+Functions to manipulate score dataframes
+"""
 
 def round_half_point(val):
@@ -19,12 +21,13 @@ def round_half_point(val):
 def score_to_mark(x):
-    """ Compute the mark
+    """Compute the mark
 
     if the item is leveled then the score is multiplied by the score_rate,
     otherwise the score is copied
 
     :param x: dictionary with COLUMNS["is_leveled"], COLUMNS["score"] and COLUMNS["score_rate"] keys
     :return: the mark
 
     >>> d = {"Eleve":["E1"]*6 + ["E2"]*6,
     ...      COLUMNS["score_rate"]:[1]*2+[2]*2+[2]*2 + [1]*2+[2]*2+[2]*2,
@@ -43,9 +46,10 @@ def score_to_mark(x):
     if x[COLUMNS["is_leveled"]]:
         if x[COLUMNS["score"]] not in [0, 1, 2, 3]:
-            raise ValueError(f"The evaluation is out of range: {x[COLUMNS['score']]} at {x}")
-        # return round_half_point(x[COLUMNS["score"]] * x[COLUMNS["score_rate"]] / 3)
-        return round(x[COLUMNS["score"]] * x[COLUMNS["score_rate"]] / 3, 2)
+            raise ValueError(
+                f"The evaluation is out of range: {x[COLUMNS['score']]} at {x}"
+            )
+        return round_half_point(x[COLUMNS["score"]] * x[COLUMNS["score_rate"]] / 3)
 
     if x[COLUMNS["score"]] > x[COLUMNS["score_rate"]]:
         raise ValueError(
@@ -55,9 +59,10 @@ def score_to_mark(x):
 def score_to_level(x):
-    """ Compute the level (".",0,1,2,3).
+    """Compute the level (".",0,1,2,3).
 
     :param x: dictionary with COLUMNS["is_leveled"], COLUMNS["score"] and COLUMNS["score_rate"] keys
     :return: the level
 
     >>> d = {"Eleve":["E1"]*6 + ["E2"]*6,
     ...      COLUMNS["score_rate"]:[1]*2+[2]*2+[2]*2 + [1]*2+[2]*2+[2]*2,
@@ -92,7 +97,9 @@ def score_to_level(x):
 def compute_mark(df):
-    """ Add Mark column to df
+    """Compute the mark for the dataframe
+
+    apply score_to_mark to each row
 
     :param df: DataFrame with COLUMNS["score"], COLUMNS["is_leveled"] and COLUMNS["score_rate"] columns.
@@ -123,9 +130,12 @@ def compute_mark(df):
 def compute_level(df):
-    """ Add Mark column to df
+    """Compute the level for the dataframe
+
+    Applies score_to_level to each row
 
     :param df: DataFrame with COLUMNS["score"], COLUMNS["is_leveled"] and COLUMNS["score_rate"] columns.
     :return: Columns with level
 
     >>> d = {"Eleve":["E1"]*6 + ["E2"]*6,
     ...      COLUMNS["score_rate"]:[1]*2+[2]*2+[2]*2 + [1]*2+[2]*2+[2]*2,
@@ -154,9 +164,10 @@ def compute_level(df):
 def compute_normalized(df):
-    """ Compute the normalized mark (Mark / score_rate)
+    """Compute the normalized mark (Mark / score_rate)
 
     :param df: DataFrame with "Mark" and COLUMNS["score_rate"] columns
     :return: column with normalized mark
 
     >>> d = {"Eleve":["E1"]*6 + ["E2"]*6,
     ...      COLUMNS["score_rate"]:[1]*2+[2]*2+[2]*2 + [1]*2+[2]*2+[2]*2,
@@ -187,7 +198,9 @@ def compute_normalized(df):
 def pp_q_scores(df):
-    """ Postprocessing questions scores dataframe
+    """Postprocess the questions scores dataframe
+
+    Add 3 columns: mark, level and normalized
 
     :param df: questions-scores dataframe
     :return: same data frame with mark, level and normalized columns
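The leveled branch of score_to_mark maps a level in {0, 1, 2, 3} to a fraction of the question's points. A worked check, assuming round_half_point rounds to the nearest half point (its body is not part of this hunk):

def round_half_point(val):
    # assumption: the real helper rounds to the nearest half point
    return round(2 * val) / 2

# a level of 2 out of 3 on a question rated 5 points:
print(round_half_point(2 * 5 / 3))  # 2 * 5 / 3 = 3.33..., rounded to 3.5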


@@ -1,10 +0,0 @@
#!/usr/bin/env python
# encoding: utf-8
import yaml
CONFIGPATH = "recoconfig.yml"
with open(CONFIGPATH, "r") as configfile:
    config = yaml.load(configfile, Loader=yaml.FullLoader)

132
recopytex/scripts/exam.py Normal file

@@ -0,0 +1,132 @@
#!/usr/bin/env python
# encoding: utf-8

from datetime import datetime
from pathlib import Path
from prompt_toolkit import HTML
import yaml

from .getconfig import config


class Exam:
    def __init__(self, name, tribename, date, term, **kwrds):
        self._name = name
        self._tribename = tribename

        try:
            self._date = datetime.strptime(date, "%y%m%d")
        except (TypeError, ValueError):
            # the date may already be a datetime
            self._date = date

        self._term = term

        self._exercises = {}

    @property
    def name(self):
        return self._name

    @property
    def tribename(self):
        return self._tribename

    @property
    def date(self):
        return self._date

    @property
    def term(self):
        return self._term

    def add_exercise(self, name, questions):
        """ Add key with questions in ._exercises """
        try:
            self._exercises[name]
        except KeyError:
            self._exercises[name] = questions
        else:
            raise KeyError("The exercise already exists. Use modify_exercise")

    def modify_exercise(self, name, questions, append=False):
        """Modify questions of an exercise

        If append==True, add questions to the exercise questions
        """
        try:
            self._exercises[name]
        except KeyError:
            raise KeyError("The exercise does not exist. Use add_exercise")
        else:
            if append:
                self._exercises[name] += questions
            else:
                self._exercises[name] = questions

    @property
    def exercices(self):
        return self._exercises

    @property
    def tribe_path(self):
        return Path(config["source"]) / self.tribename

    @property
    def tribe_student_path(self):
        return (
            Path(config["source"])
            / [t["students"] for t in config["tribes"] if t["name"] == self.tribename][
                0
            ]
        )

    @property
    def long_name(self):
        """ Get exam name with date inside """
        return f"{self.date.strftime('%y%m%d')}_{self.name}"

    def path(self, extension=""):
        return self.tribe_path / (self.long_name + extension)

    def to_dict(self):
        return {
            "name": self.name,
            "tribename": self.tribename,
            "date": self.date,
            "term": self.term,
            "exercices": self.exercices,
        }

    def to_row(self):
        rows = []
        for ex, questions in self.exercices.items():
            for q in questions:
                rows.append(
                    {
                        "term": self.term,
                        "assessment": self.name,
                        "date": self.date.strftime("%d/%m/%Y"),
                        "exercise": ex,
                        "question": q["id"],
                        **q,
                    }
                )
        return rows

    @property
    def themes(self):
        themes = set()
        for questions in self._exercises.values():
            themes.update([q["theme"] for q in questions])
        return themes

    def display_exercise(self, name):
        pass

    def display(self, name):
        pass

    def write(self):
        print(f"Sauvegarde temporaire dans {self.path('.yml')}")
        self.tribe_path.mkdir(exist_ok=True)
        with open(self.path(".yml"), "w") as f:
            f.write(yaml.dump(self.to_dict()))
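A hedged usage sketch of the new Exam class; it assumes a recoconfig.yml like the example above sits in the working directory, and every name and value is made up:

from recopytex.scripts.exam import Exam

exam = Exam(name="DS1", tribename="Tribe1", date="210114", term="1")
exam.add_exercise(
    "Exercice 1",
    [
        {"id": "1a", "competence": "Calculer", "theme": "Fractions",
         "comment": "Simplifier", "is_leveled": "1", "score_rate": "2"},
    ],
)
exam.write()          # temporary save, here to Tribe1/210114_DS1.yml
rows = exam.to_row()  # one dict per question, keyed like NO_ST_COLUMNS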


@@ -0,0 +1,9 @@
#!/usr/bin/env python
# encoding: utf-8
import yaml
CONFIGPATH = "recoconfig.yml"
with open(CONFIGPATH, "r") as configfile:
    config = yaml.load(configfile, Loader=yaml.FullLoader)


@@ -1,160 +0,0 @@
#!/usr/bin/env python
# encoding: utf-8
import click
from pathlib import Path
from datetime import datetime
from PyInquirer import prompt, print_json
import pandas as pd
import numpy as np
from .config import config
from ..config import NO_ST_COLUMNS
class PromptAbortException(EOFError):
def __init__(self, message, errors=None):
# Call the base class constructor with the parameters it needs
super(PromptAbortException, self).__init__("Abort questionnary", errors)
def get_tribes(answers):
""" List tribes based on subdirectory of config["source"] which have an "eleves.csv" file inside """
return [
p.name for p in Path(config["source"]).iterdir() if (p / "eleves.csv").exists()
]
def prepare_csv():
items = new_eval()
item = items[0]
# item = {"tribe": "308", "date": datetime.today(), "assessment": "plop"}
csv_output = (
Path(config["source"])
/ item["tribe"]
/ f"{item['date']:%y%m%d}_{item['assessment']}.csv"
)
students = pd.read_csv(Path(config["source"]) / item["tribe"] / "eleves.csv")["Nom"]
columns = list(NO_ST_COLUMNS.keys())
items = [[it[c] for c in columns] for it in items]
columns = list(NO_ST_COLUMNS.values())
items_df = pd.DataFrame.from_records(items, columns=columns)
for s in students:
items_df[s] = np.nan
items_df.to_csv(csv_output, index=False, date_format="%d/%m/%Y")
click.echo(f"Saving csv file to {csv_output}")
def new_eval(answers={}):
click.echo(f"Préparation d'un nouveau devoir")
eval_questions = [
{"type": "input", "name": "assessment", "message": "Nom de l'évaluation",},
{
"type": "list",
"name": "tribe",
"message": "Classe concernée",
"choices": get_tribes,
},
{
"type": "input",
"name": "date",
"message": "Date du devoir (%y%m%d)",
"default": datetime.today().strftime("%y%m%d"),
"filter": lambda val: datetime.strptime(val, "%y%m%d"),
},
{
"type": "list",
"name": "term",
"message": "Trimestre",
"choices": ["1", "2", "3"],
},
]
eval_ans = prompt(eval_questions)
items = []
add_exo = True
while add_exo:
ex_items = new_exercice(eval_ans)
items += ex_items
add_exo = prompt(
[
{
"type": "confirm",
"name": "add_exo",
"message": "Ajouter un autre exercice",
"default": True,
}
]
)["add_exo"]
return items
def new_exercice(answers={}):
exercise_questions = [
{"type": "input", "name": "exercise", "message": "Nom de l'exercice"},
]
click.echo(f"Nouvel exercice")
exercise_ans = prompt(exercise_questions, answers=answers)
items = []
add_item = True
while add_item:
try:
item_ans = new_item(exercise_ans)
except PromptAbortException:
click.echo("Création de l'item annulée")
else:
items.append(item_ans)
add_item = prompt(
[
{
"type": "confirm",
"name": "add_item",
"message": f"Ajouter un autre item pour l'exercice {exercise_ans['exercise']}",
"default": True,
}
]
)["add_item"]
return items
def new_item(answers={}):
item_questions = [
{"type": "input", "name": "question", "message": "Nom de l'item",},
{"type": "input", "name": "comment", "message": "Commentaire",},
{
"type": "list",
"name": "competence",
"message": "Competence",
"choices": ["Cher", "Rep", "Mod", "Rai", "Cal", "Com"],
},
{"type": "input", "name": "theme", "message": "Domaine",},
{
"type": "confirm",
"name": "is_leveled",
"message": "Évaluation par niveau",
"default": True,
},
{"type": "input", "name": "score_rate", "message": "Bareme"},
{
"type": "confirm",
"name": "correct",
"message": "Tout est correct?",
"default": True,
},
]
click.echo(f"Nouvelle question pour l'exercice {answers['exercise']}")
item_ans = prompt(item_questions, answers=answers)
if item_ans["correct"]:
return item_ans
raise PromptAbortException("Abort item creation")


@@ -0,0 +1,233 @@
#!/usr/bin/env python
# encoding: utf-8
from prompt_toolkit import prompt, HTML, ANSI
from prompt_toolkit import print_formatted_text as print
from prompt_toolkit.styles import Style
from prompt_toolkit.validation import Validator
from prompt_toolkit.completion import WordCompleter
from unidecode import unidecode
from datetime import datetime
from functools import wraps
import sys
from .getconfig import config
VALIDATE = [
"o",
"ok",
"OK",
"oui",
"OUI",
"yes",
"YES",
]
REFUSE = ["n", "non", "NON", "no", "NO"]
CANCEL = ["a", "annuler"]
STYLE = Style.from_dict(
{
"": "#93A1A1",
"validation": "#884444",
"appending": "#448844",
}
)
class CancelError(Exception):
pass
def prompt_validate(question, cancelable=False, empty_means=1, style="validation"):
"""Prompt for validation
:param question: Text to print to ask the question.
:param cancelable: enable cancel answer
:param empty_means: result for no answer
:return:
0 -> Refuse
1 -> Validate
-1 -> cancel
"""
question_ = question
choices = VALIDATE + REFUSE
if cancelable:
question_ += "(a ou annuler pour sortir)"
choices += CANCEL
ans = prompt(
[
(f"class:{style}", question_),
],
completer=WordCompleter(choices),
style=STYLE,
).lower()
if ans == "":
return empty_means
if ans in VALIDATE:
return 1
if cancelable and ans in CANCEL:
return -1
return 0
def prompt_until_validate(question="C'est ok? ", cancelable=False):
def decorator(func):
@wraps(func)
def wrapper(*args, **kwrd):
ans = func(*args, **kwrd)
confirm = prompt_validate(question, cancelable)
if confirm == -1:
raise CancelError
while not confirm:
sys.stdout.flush()
ans = func(*args, **ans, **kwrd)
confirm = prompt_validate(question, cancelable)
if confirm == -1:
raise CancelError
return ans
return wrapper
return decorator
@prompt_until_validate()
def prompt_exam(**kwrd):
""" Prompt questions to edit an exam """
print(HTML("<b>Nouvelle évaluation</b>"))
exam = {}
exam["name"] = prompt("Nom de l'évaluation: ", default=kwrd.get("name", "DS"))
tribes_name = [t["name"] for t in config["tribes"]]
exam["tribename"] = prompt(
"Nom de la classe: ",
default=kwrd.get("tribename", ""),
completer=WordCompleter(tribes_name),
validator=Validator.from_callable(lambda x: x in tribes_name),
)
exam["tribe"] = [t for t in config["tribes"] if t["name"] == exam["tribename"]][0]
exam["date"] = prompt(
"Date de l'évaluation (%y%m%d): ",
default=kwrd.get("date", datetime.today()).strftime("%y%m%d"),
validator=Validator.from_callable(lambda x: (len(x) == 6) and x.isdigit()),
)
exam["date"] = datetime.strptime(exam["date"], "%y%m%d")
exam["term"] = prompt(
"Trimestre: ",
validator=Validator.from_callable(lambda x: x.isdigit()),
default=kwrd.get("term", "1"),
)
return exam
@prompt_until_validate()
def prompt_exercise(number=1, completer={}, **kwrd):
exercise = {}
try:
kwrd["name"]
except KeyError:
print(HTML("<b>Nouvel exercice</b>"))
exercise["name"] = prompt(
"Nom de l'exercice: ", default=kwrd.get("name", f"Exercice {number}")
)
else:
print(HTML(f"<b>Modification de l'exercice: {kwrd['name']}</b>"))
exercise["name"] = kwrd["name"]
exercise["questions"] = []
try:
kwrd["questions"][0]
except KeyError:
last_question_id = "1a"
except IndexError:
last_question_id = "1a"
else:
for ques in kwrd["questions"]:
try:
exercise["questions"].append(
prompt_question(completer=completer, **ques)
)
except CancelError:
print("Cette question a été supprimée")
last_question_id = exercise["questions"][-1]["id"]
appending = prompt_validate(
question="Ajouter un élément de notation? ", style="appending"
)
while appending:
try:
exercise["questions"].append(
prompt_question(last_question_id, completer=completer)
)
except CancelError:
print("Cette question a été supprimée")
else:
last_question_id = exercise["questions"][-1]["id"]
appending = prompt_validate(
question="Ajouter un élément de notation? ", style="appending"
)
return exercise
@prompt_until_validate(cancelable=True)
def prompt_question(last_question_id="1a", completer={}, **kwrd):
try:
kwrd["id"]
except KeyError:
print(HTML("<b>Nouvel élément de notation</b>"))
else:
print(
HTML(f"<b>Modification de l'élément {kwrd['id']} ({kwrd['comment']})</b>")
)
question = {}
question["id"] = prompt(
"Identifiant de la question: ",
default=kwrd.get("id", "1a"),
)
question["competence"] = prompt(
"Competence: ",
default=kwrd.get("competence", list(config["competences"].keys())[0]),
completer=WordCompleter(config["competences"].keys()),
validator=Validator.from_callable(lambda x: x in config["competences"].keys()),
)
question["theme"] = prompt(
"Domaine: ",
default=kwrd.get("theme", ""),
completer=WordCompleter(completer.get("theme", [])),
)
question["comment"] = prompt(
"Commentaire: ",
default=kwrd.get("comment", ""),
)
question["is_leveled"] = prompt(
"Évaluation par niveau: ",
default=kwrd.get("is_leveled", "1"),
# validate
)
question["score_rate"] = prompt(
"Barème: ",
default=kwrd.get("score_rate", "1"),
# validate
)
return question
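prompt_until_validate is the backbone of this module: the wrapped prompt runs once, the user confirms, and on refusal it runs again pre-filled with the previous answers. A hedged sketch of a prompt written inside this module against that decorator (the field is hypothetical):

from prompt_toolkit import prompt

@prompt_until_validate()
def prompt_demo(**kwrd):
    ans = {}
    # on a re-run, kwrd carries the previous answers, so they become defaults
    ans["name"] = prompt("Nom: ", default=kwrd.get("name", ""))
    return ans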


@@ -3,13 +3,17 @@
 import click
 from pathlib import Path
-import yaml
 import sys
 import papermill as pm
 import pandas as pd
 from datetime import datetime
+import yaml
 
-from .prepare_csv import prepare_csv
-from .config import config
+from .getconfig import config, CONFIGPATH
+from .prompts import prompt_exam, prompt_exercise, prompt_validate
+from ..config import NO_ST_COLUMNS
+from .exam import Exam
+from ..dashboard.exam import app as exam_app
 
 
 @click.group()
@@ -24,8 +28,81 @@ def print_config():
     click.echo(config)
 
 
-def reporting(csv_file):
-    # csv_file = Path(csv_file)
+@cli.command()
+def setup():
+    """Setup the environment using recoconfig.yml"""
+    for tribe in config["tribes"]:
+        Path(tribe["name"]).mkdir(exist_ok=True)
+        if not Path(tribe["students"]).exists():
+            print(f"The file {tribe['students']} does not exist")
+
+
+@cli.command()
+def new_exam():
+    """ Create new exam csv file """
+    exam = Exam(**prompt_exam())
+
+    if exam.path(".yml").exists():
+        print(f"Fichier sauvegarde trouvé à {exam.path('.yml')} -- importation")
+        with open(exam.path(".yml"), "r") as f:
+            for name, questions in yaml.load(f, Loader=yaml.SafeLoader)[
+                "exercices"
+            ].items():
+                exam.add_exercise(name, questions)
+
+    print(exam.themes)
+    # print(yaml.dump(exam.to_dict()))
+    exam.write()
+
+    for name, questions in exam.exercices.items():
+        exam.modify_exercise(
+            **prompt_exercise(
+                name=name, completer={"theme": exam.themes}, questions=questions
+            )
+        )
+        exam.write()
+
+    new_exercise = prompt_validate("Ajouter un exercice? ")
+    while new_exercise:
+        exam.add_exercise(
+            **prompt_exercise(len(exam.exercices) + 1, completer={"theme": exam.themes})
+        )
+        exam.write()
+        new_exercise = prompt_validate("Ajouter un exercice? ")
+
+    rows = exam.to_row()
+    base_df = pd.DataFrame.from_dict(rows)[NO_ST_COLUMNS.keys()]
+    base_df.rename(columns=NO_ST_COLUMNS, inplace=True)
+
+    students = pd.read_csv(exam.tribe_student_path)["Nom"]
+    for student in students:
+        base_df[student] = ""
+
+    exam.tribe_path.mkdir(exist_ok=True)
+    base_df.to_csv(exam.path(".csv"), index=False)
+    print(f"Le fichier note a été enregistré à {exam.path('.csv')}")
+
+
+@cli.command()
+def exam_analysis():
+    exam_app.run_server(debug=True)
+
+
+@cli.command()
+@click.argument("csv_file")
+def report(csv_file):
+    csv = Path(csv_file)
+    if not csv.exists():
+        click.echo(f"{csv_file} does not exist")
+        sys.exit(1)
+    if csv.suffix != ".csv":
+        click.echo(f"{csv_file} has to be a csv file")
+        sys.exit(1)
+
+    csv_file = Path(csv_file)
     tribe_dir = csv_file.parent
     csv_filename = csv_file.name.split(".")[0]
@@ -34,7 +111,7 @@ def reporting(csv_file):
     try:
         date = datetime.strptime(date, "%y%m%d")
     except ValueError:
-        date = datetime.today().strptime(date, "%y%m%d")
+        date = None
 
     tribe = str(tribe_dir).split("/")[-1]
@@ -54,49 +131,3 @@ def reporting(csv_file):
             csv_file=str(csv_file.absolute()),
         ),
     )
-
-
-@cli.command()
-@click.argument("target", required=False)
-def report(target=""):
-    """ Make a report for the eval
-
-    :param target: csv file or a directory where csvs are
-    """
-    try:
-        if target.endswith(".csv"):
-            csv = Path(target)
-            if not csv.exists():
-                click.echo(f"{target} does not exists")
-                sys.exit(1)
-            if csv.suffix != ".csv":
-                click.echo(f"{target} has to be a csv file")
-                sys.exit(1)
-            csvs = [csv]
-        else:
-            csvs = list(Path(target).glob("**/*.csv"))
-    except AttributeError:
-        csvs = list(Path(config["source"]).glob("**/*.csv"))
-
-    for csv in csvs:
-        click.echo(f"Processing {csv}")
-        try:
-            reporting(csv)
-        except pm.exceptions.PapermillExecutionError as e:
-            click.echo(f"Error with {csv}: {e}")
-
-
-@cli.command()
-def prepare():
-    """ Prepare csv file """
-    items = prepare_csv()
-    click.echo(items)
-
-
-@cli.command()
-@click.argument("tribe")
-def random_pick(tribe):
-    """ Randomly pick a student """
-    pass
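A hedged way to exercise the reworked command group without installing the console script, using click's test runner (the module path recopytex.scripts.recopytex and the csv path are assumptions):

from click.testing import CliRunner

from recopytex.scripts.recopytex import cli

runner = CliRunner()
print(runner.invoke(cli, ["setup"]).output)
print(runner.invoke(cli, ["report", "Tribe1/210114_DS1.csv"]).output)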


@@ -1,76 +1,4 @@
-ansiwrap==0.8.4
-appdirs==1.4.3
-attrs==19.1.0
-backcall==0.1.0
-black==19.10b0
-bleach==3.1.0
-certifi==2019.6.16
-chardet==3.0.4
-Click==7.0
-colorama==0.4.1
-cycler==0.10.0
-decorator==4.4.0
-defusedxml==0.6.0
-entrypoints==0.3
-future==0.17.1
-idna==2.8
-importlib-resources==1.0.2
-ipykernel==5.1.3
-ipython==7.11.1
-ipython-genutils==0.2.0
-ipywidgets==7.5.1
-jedi==0.15.2
-Jinja2==2.10.3
-jsonschema==3.2.0
-jupyter==1.0.0
-jupyter-client==5.3.4
-jupyter-console==6.1.0
-jupyter-core==4.6.1
-jupytex==0.0.3
-kiwisolver==1.1.0
-Markdown==3.1.1
-MarkupSafe==1.1.1
-matplotlib==3.1.2
-mistune==0.8.4
-nbconvert==5.6.1
-nbformat==5.0.3
-notebook==6.0.3
-numpy==1.18.1
-pandas==0.25.3
-pandocfilters==1.4.2
-papermill==1.2.1
-parso==0.5.2
-pathspec==0.7.0
-pexpect==4.8.0
-pickleshare==0.7.5
-prometheus-client==0.7.1
-prompt-toolkit==1.0.14
-ptyprocess==0.6.0
-Pygments==2.5.2
-PyInquirer==1.0.3
-pyparsing==2.4.6
-pyrsistent==0.15.7
-python-dateutil==2.8.0
-pytz==2019.3
-PyYAML==5.3
-pyzmq==18.1.1
-qtconsole==4.6.0
--e git+git_opytex:/lafrite/recopytex.git@7e026bedb24c1ca8bef3b71b3d63f8b0d6916e81#egg=Recopytex
-regex==2020.1.8
-requests==2.22.0
-scipy==1.4.1
-Send2Trash==1.5.0
-six==1.12.0
-tenacity==6.0.0
-terminado==0.8.3
-testpath==0.4.4
-textwrap3==0.9.2
-toml==0.10.0
-tornado==6.0.3
-tqdm==4.41.1
-traitlets==4.3.2
-typed-ast==1.4.1
-urllib3==1.25.8
-wcwidth==0.1.8
-webencodings==0.5.1
-widgetsnbextension==3.5.1
+pandas
+click
+papermill
+prompt_toolkit

69
requirements_dev.txt Normal file

@@ -0,0 +1,69 @@
ansiwrap
attrs
backcall
bleach
certifi
chardet
Click
colorama
cycler
decorator
defusedxml
entrypoints
future
idna
importlib-resources
ipykernel
ipython
ipython-genutils
ipywidgets
jedi
Jinja2
jsonschema
jupyter
jupyter-client
jupyter-console
jupyter-core
jupytex
kiwisolver
MarkupSafe
matplotlib
mistune
nbconvert
nbformat
notebook
numpy
pandas
pandocfilters
papermill
parso
pexpect
pickleshare
prometheus-client
prompt-toolkit
ptyprocess
Pygments
pyparsing
pyrsistent
python-dateutil
pytz
PyYAML
pyzmq
qtconsole
-e git+git_opytex:/lafrite/recopytex.git@e9a8310f151ead60434ae944d726a2fd22b23d06#egg=Recopytex
requests
scipy
seaborn
Send2Trash
six
tenacity
terminado
testpath
textwrap3
tornado
tqdm
traitlets
urllib3
wcwidth
webencodings
widgetsnbextension


@@ -5,7 +5,7 @@ from setuptools import setup, find_packages
 setup(
     name='Recopytex',
-    version='1.1.1',
+    version='0.1',
     description='Assessment analysis',
     author='Benjamin Bertrand',
     author_email='',
@@ -13,11 +13,6 @@ setup(
     include_package_data=True,
     install_requires=[
         'Click',
-        'pandas',
-        'numpy',
-        'papermill',
-        'pyyaml',
-        'PyInquirer',
     ],
     entry_points='''
     [console_scripts]