An API to classify editor accounts using LodBrok (a Keras model) in the backend. It also has an option to retrain the model for future enhancement, taking incorrect predictions into account based on SpamNinja feedback.
### Steps to run the API:
1) Install all the dependencies needed in a virtual environment:
```
pip install -r requirements.txt
```
2) In the spambrainz folder, set the following environment variables in the terminal to run the API:
```
$ export FLASK_APP=sb_api.py
$ export FLASK_DEBUG=1 # only for development purposes
$ export FLASK_RUN_PORT=4321 # API requests are sent to this port number
```
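3) With those variables set, the development server can be started with Flask's own CLI, which picks up the variables exported above:

```
$ flask run
```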
### Steps to test the API:

1) There are two request scripts, namely classify_request.py and train_request.py, for /predict and /train respectively.
2) classify_request.py sends a spam editor account to be classified by the LodBrok model running in the backend. The command `python classify_request.py` prints the prediction returned by the API; a hypothetical sketch of such a request is shown below.
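The exact payload lives in classify_request.py itself; this is only a minimal sketch of such a request, with illustrative field names and values that are assumptions, not the script's actual contents:

```
# Hypothetical sketch of a /predict request; field names/values are assumed.
import requests

editor = {
    "id": 1,                                # editor id, used to map results back
    "name": "spam_editor",
    "bio": "Buy cheap watches",             # unfilled fields may be sent as None
    "area": None,
    "member_since": "2019-05-17 10:41:23",  # datetimes travel as strings in JSON
}

# The API listens on the port exported via FLASK_RUN_PORT above.
response = requests.post("http://localhost:4321/predict", json={"editors": [editor]})
print(response.json())                      # prediction(s) keyed by editor id
```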

3) train_request.py sends spam editor accounts along with the **verdict** given by SpamNinja (spam or not). The command `python train_request.py` submits them to /train; a hypothetical sketch of such a request is shown below.
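Again as a minimal, hypothetical sketch (field names assumed, not taken from the script):

```
# Hypothetical sketch of a /train request; note the extra "verdict" field.
import requests

editor = {
    "id": 1,
    "name": "spam_editor",
    "bio": None,
    "area": None,
    "member_since": "2019-05-17 10:41:23",
    "verdict": 1,   # SpamNinja's verdict; 1 = spam, 0 = not spam (assumed encoding)
}

response = requests.post("http://localhost:4321/train", json={"editors": [editor]})
print(response.json())   # success message once retraining finishes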

More details regarding the API's functioning are written [here](app/README.md).
- The initialization of the application is done in **`__init__.py`**, where the Flask, Redis, and model instances are initialized.
- **`../sb_api.py`** then takes the above instances and runs the application.
- (Note: this is done to avoid circular imports in Flask; a minimal sketch of the split follows below.)
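Module contents and names here are assumptions, not copied from the repo:

```
# app/__init__.py -- create the shared instances once, at import time
from flask import Flask
import redis
from keras.models import load_model

app = Flask(__name__)
db = redis.StrictRedis(host="localhost", port=6379, db=0)
model = load_model("static/weights/current_lodbrok.h5")  # path assumed

from app import routes  # imported last, so routes.py can import app safely
```

```
# sb_api.py -- only imports the ready-made app; it never builds it itself,
# so sb_api.py and routes.py never import each other (no circular imports)
from app import app
```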
- The **classify.py** contains classify_process(), which classifies editors by retrieving the data stored in Redis. (A condensed sketch of this flow follows the list below.)
- First, it converts the given JSON data into a form the model can predict on. For this:
- It converts the datetime objects stored as strings in JSON back into datetime objects with the help of the string_to_dateitime function.
- It converts the JSON data to an np array with the help of the preprocess_editor function.
- After the preprocessed data is obtained, it performs the necessary predictions and stores them back into Redis to be retrieved later.
- The editor details stored in Redis are then removed from the queue.
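A condensed sketch of that flow (the helper names come from the text above; the queue and key names are assumptions):

```
# classify.py (sketch) -- drain the redis queue, predict, store results by id
import json
import numpy as np
from app import db, model
from preprocessing import string_to_dateitime, preprocess_editor

def classify_process(size):
    # Pull up to `size` queued editors ("editor_queue" is an assumed name)
    entries = db.lrange("editor_queue", 0, size - 1)
    rows, ids = [], []
    for raw in entries:
        editor = json.loads(raw)
        editor = string_to_dateitime(editor)     # JSON strings -> datetime objects
        rows.append(preprocess_editor(editor))   # dict -> np array row
        ids.append(editor["id"])
    predictions = model.predict(np.vstack(rows))
    for editor_id, pred in zip(ids, predictions):
        db.set(editor_id, json.dumps(pred.tolist()))  # fetched later by routes.py
    db.ltrim("editor_queue", len(entries), -1)        # remove processed entries
```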
- The **routes.py** contains the necessary API endpoints, /predict and /train, which are called when POST requests are made to the API.
- The **/predict** endpoint:
- In this endpoint, the input JSON data is pushed into the Redis queue after being converted into a compatible form.
- The editor ids are stored beforehand so that the results, stored in Redis by classify_process and keyed by id, can be retrieved later on.
- Unfilled details such as area or bio are set to None for compatibility later on.
- classify_process() is called with a size argument to load the model and classify the editor accounts retrieved back from Redis.
- The results stored in Redis by classify_process are retrieved and sent back to SpamNinja in JSON format. (A sketch of this endpoint follows below.)
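Put together, the endpoint could look roughly like this (payload shape and queue name are assumptions):

```
# routes.py (sketch) -- the /predict endpoint
import json
from flask import request, jsonify
from app import app, db
from classify import classify_process

@app.route("/predict", methods=["POST"])
def predict():
    editors = request.get_json()["editors"]
    ids = [e["id"] for e in editors]              # remembered to fetch results later
    for editor in editors:
        editor.setdefault("area", None)           # unfilled details become None
        editor.setdefault("bio", None)
        db.rpush("editor_queue", json.dumps(editor))
    classify_process(size=len(editors))           # load model, classify, store results
    results = {i: json.loads(db.get(i)) for i in ids}
    return jsonify(results)                       # sent back to SpamNinja
```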
- The **/train** endpoint:
- In this endpoint, the input JSON data, which carries the editor details plus the additional **verdict** parameter, is directly converted into a compatible format (np array) to be used by the model for retraining.
- Unfilled details such as area or bio are set to None for compatibility later on, and the datetime objects stored as strings in JSON are converted back into datetime objects with the help of the string_to_dateitime function.
- Finally, preprocess_editor converts the data into an np array.
- The preprocessed data is sent to the retrain_model function to retrain the model.
- If retraining succeeds, a success JSON message is sent back. (A sketch of this endpoint follows below.)
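A corresponding sketch for this endpoint (same caveats: payload shape and helper signatures are assumptions):

```
# routes.py (sketch) -- the /train endpoint; no redis queue involved
import numpy as np
from flask import request, jsonify
from app import app
from preprocessing import string_to_dateitime, preprocess_editor
from train import retrain_model

@app.route("/train", methods=["POST"])
def train():
    editors = request.get_json()["editors"]       # each carries a "verdict" field
    rows, verdicts = [], []
    for editor in editors:
        editor.setdefault("area", None)
        editor.setdefault("bio", None)
        editor = string_to_dateitime(editor)
        rows.append(preprocess_editor(editor))    # dict -> np array row
        verdicts.append(editor["verdict"])
    retrain_model(np.vstack(rows), np.array(verdicts))
    return jsonify({"status": "success"})         # success JSON message
```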
- The **preprocessing.py** preprocesses the JSON/dict editor data into a proper np array for the model to predict/train on. It uses the tokenizers created during the initial training to convert each parameter properly; a toy illustration follows below.
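The key point is that the tokenizers fitted during the original training are reused, so new editors are encoded exactly the way the model expects. A toy illustration of that idea (not the module's actual code):

```
from keras.preprocessing.text import Tokenizer

# Fitted once on the original training corpus and then persisted; reusing it
# keeps the word -> index mapping stable for every later request.
bio_tokenizer = Tokenizer(num_words=1000)
bio_tokenizer.fit_on_texts(["buy cheap watches", "i love editing music metadata"])

# At predict/train time, the same tokenizer encodes new bios consistently
print(bio_tokenizer.texts_to_sequences(["cheap watches here"]))
```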
- The **train.py** contains the retraining part of the model.
- The data sent to retrain_model is used to retrain the model.
- The learning rate of the optimizer (Adam) is set to 0.001, a low static value, so that the model learns new patterns while keeping in mind not to forget its old learnings (avoiding catastrophic forgetting).
- The current model is saved as previous_lodbrok.h5 in ../static/weights/ for future reference, in case we want to roll back.
- It then calls the train_model function, which continues the model's learning with the new data. The batch size is set to only 1, with 2 epochs, so as to keep the learning balanced.
- After training is done, the new model is saved as current_lodbrok.h5, overwriting the old model so it continues classifying new data over time.
- The original LodBrok weights are saved in original_lodbrok.h5 so we can go back and trace the progress made so far. (A condensed sketch of this retraining step follows below.)
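Condensed into one function, the retraining step described above could look like this (paths and the loss are assumptions; the text says the real code splits this between retrain_model and train_model):

```
# train.py (sketch) -- retrain on new SpamNinja-verified data
from keras.models import load_model
from keras.optimizers import Adam

def retrain_model(x, y):
    model = load_model("../static/weights/current_lodbrok.h5")   # path assumed
    # Keep the current weights around in case we want to roll back later.
    model.save("../static/weights/previous_lodbrok.h5")
    # Low, static learning rate: adapt to new patterns without
    # catastrophically forgetting what was already learned.
    model.compile(optimizer=Adam(learning_rate=0.001),
                  loss="binary_crossentropy",                    # assumed loss
                  metrics=["accuracy"])
    # Batch size 1 for 2 epochs, as described above.
    model.fit(x, y, batch_size=1, epochs=2)
    # Overwrite the serving model so it keeps classifying new data.
    model.save("../static/weights/current_lodbrok.h5")
```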
- **Benefits of using this method:**
- There is no need to maintain any extra database to store the new data sent by SpamNinja or the old data on which the model was trained.
- There is no fixed size for the batch of data sent to the model; it can be any number of editor accounts.
- The model's old learnings won't be forgotten quickly, thanks to the **low, static learning rate** and the fact that the structure of the data stays the same.