
Commit 1950e96

Add detailed documentation of API
- Added detailed documentation about installation, request results, and the internal working of the API.
- Added more comments wherever necessary.
- Added an original_lodbrok weight file for backup.
- Removed unnecessary files and images.
1 parent cd03048 commit 1950e96

File tree

12 files changed: +94 −59 lines


README.md

Lines changed: 30 additions & 11 deletions
````diff
@@ -1,21 +1,40 @@
-# SB_API
-Editor account prediction using LodBrok model in the backend.
+# SpamBrainz API
+An API to classify editor accounts using the LodBrok Keras model in the backend. It also offers an option to retrain the model for future enhancement, taking incorrect predictions into account based on SpamNinja feedback.
 
-Install all the dependencies needed in virtual environment:
+### Steps to run the API:
+
+1) Install all the dependencies needed in a virtual environment:
 
 ```
 pip install -r requirements.txt
+```
+2) In the spambrainz folder, set the following environment variables in the terminal to run the API:
+```
+$ export FLASK_APP=sb_api.py
+$ export FLASK_DEBUG=1        # only for development purposes
+$ export FLASK_RUN_PORT=4321  # API requests are sent to this port number
 ```
 
-Run the flask application from root folder in virutal environment:
-
+3) Install Redis:
 ```
-python run_keras_server.py
+$ wget http://download.redis.io/redis-stable.tar.gz
+$ tar xvzf redis-stable.tar.gz
+$ cd redis-stable
+$ make
+$ sudo make install
+```
+4) Run redis-server in a separate terminal to store the data sent to SB_API:
+```
+$ redis-server
+```
+
+5) With this, all the dependencies are in place; now simply run the server with:
+```
+$ flask run
 ```
 
-Go to http://localhost:5000/static/editor.html
-
-Enter editor account detials, press prediction to get prediciton.
+This should run the API on the specified port in debug mode.
 
-This is a sample prediction done using LodBrok:
+The detailed internal functioning of the API is documented [here](spambrainz/app/README.md).
 
-![](/spambrainz/static/images/prediciton.png)
+The request scripts, their details, and their output are documented [here](spambrainz/README.md).
````

spambrainz/README.md

Lines changed: 11 additions & 0 deletions
New file:

## Functioning of requests:

1) There are two request scripts, classify_request.py and train_request.py, for the /predict and /train endpoints respectively.
2) classify_request.py sends a spam editor account to be classified by the LodBrok model running in the backend. The command ```python classify_request.py``` gives the following output:

![](static/images/classify_request.png)

3) train_request.py sends spam editor accounts along with the **verdict** given by the SpamNinja (spam or not). The command ```python train_request.py``` gives the following output:

![](static/images/train_request.png)

More details regarding the API's internal functioning are written [here](app/README.md).
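As an illustration, a minimal /predict request along the lines of classify_request.py might look like this. The payload shape, field names, and wrapper key here are assumptions for the sketch, not taken from the actual script:

```python
import json

# Hypothetical editor payload; the field names mirror the ones
# preprocess_editor reads (area, gender, birth_date, bio), but the exact
# request schema of classify_request.py is an assumption here.
editor = {
    "id": 1234,
    "area": None,                          # unfilled details are sent as None
    "gender": None,
    "birth_date": "2020-01-01 00:00:00",   # datetimes travel as strings in JSON
    "bio": "buy cheap watches",
}
payload = json.dumps({"editors": [editor]})

if __name__ == "__main__":
    import requests  # third-party; pip install requests
    # FLASK_RUN_PORT=4321, as set in the top-level README steps.
    r = requests.post(
        "http://localhost:4321/predict",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    print(r.json())
```

The network call is guarded so the module can be imported without a running server.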

spambrainz/app/README.md

Lines changed: 46 additions & 0 deletions
New file:

## Internal Functioning of API:

This is the structure of the API:

![](../static/images/api_structure.png)

- The initialization of the application is done in `__init__.py`, where the Flask, Redis, and model instances are created.

- The **../sb_api.py** imports those instances and runs the application.
    - (Note: this layout avoids circular imports in Flask.)

- The **classify.py** contains classify_process(), which classifies editor accounts by retrieving the data stored in Redis.
    - First, it converts the given JSON data into a form the model can predict on. To do this, it:
        - converts the datetime objects, stored as strings in the JSON, back into datetime objects with the help of the string_to_dateitime function;
        - converts the JSON data into an np array with the help of the preprocess_editor function.
    - Once the preprocessed data is obtained, it performs the predictions and stores the results back in Redis to be retrieved later.
    - The editor details stored in Redis are then removed from the queue.
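The string-to-datetime step above can be sketched as follows; the helper's name comes from these docs, but the timestamp format it expects is an assumption:

```python
from datetime import datetime

def string_to_dateitime(value):
    """Sketch of the helper described above: convert the datetime string
    that arrived via JSON back into a datetime object. The exact format
    string used by the real helper is an assumption."""
    if value is None:
        return None
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")

parsed = string_to_dateitime("2021-03-01 12:30:00")
# parsed == datetime(2021, 3, 1, 12, 30)
```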

- The **routes.py** contains the API endpoints, /predict and /train, which are called when POST requests are made to the API.
    - The **/predict** endpoint:
        - The input JSON data is pushed into the Redis queue after being converted into a compatible form.
        - The editor ids are stored beforehand, so the results that classify_process keys by id in Redis can be retrieved later.
        - Unfilled details such as area or bio are set to None for later compatibility.
        - classify_process() is called with the queue size to load the model and classify the editor accounts retrieved from Redis.
        - The results stored in Redis by classify_process are retrieved and sent back to the SpamNinja in JSON format.
    - The **/train** endpoint:
        - The input JSON data, carrying the additional **verdict** parameter along with the editor details, is converted directly into a compatible format (an np array) for the model to retrain on.
        - Unfilled details such as area or bio are set to None, and the datetime strings in the JSON are converted back into datetime objects with the help of the string_to_dateitime function.
        - preprocess_editor then converts the data into an np array.
        - The preprocessed data is sent to the retrain_model function to retrain the model.
        - If retraining succeeds, a success JSON message is returned.
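The id-keyed hand-off between /predict and classify_process can be sketched with a plain dict standing in for Redis; the result fields shown are hypothetical:

```python
import json

class FakeRedis:
    """Dict-backed stand-in for the Redis connection, for illustration only."""
    def __init__(self):
        self._store = {}
    def set(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)
    def delete(self, key):
        self._store.pop(key, None)

db = FakeRedis()

# classify_process stores each prediction in Redis keyed by editor id ...
editor_id = "1234"
db.set(editor_id, json.dumps({"spam": True}))  # hypothetical result shape

# ... and /predict retrieves it, attaches the id, and removes the key.
output = json.loads(db.get(editor_id))
output["id"] = editor_id
db.delete(editor_id)
```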

- The **preprocessing.py** preprocesses the JSON/dict editor data into a proper np array for the model to predict/train on; it uses the initially created tokenizers to convert each parameter appropriately.
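As a minimal sketch of the presence flags this preprocessing derives (the real preprocess_editor also tokenizes the text fields and returns an np array; this plain-list version is for illustration only):

```python
def presence_flags(editor):
    # Mirror of the boolean features preprocess_editor builds from the
    # editor dict; unfilled fields arrive as None and become False.
    return [
        editor["area"] is not None,        # Area set
        editor["gender"] is not None,      # Gender set
        editor["birth_date"] is not None,  # Birth date set
    ]

flags = presence_flags({"area": None, "gender": "male", "birth_date": None})
# flags == [False, True, False]
```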

- The **train.py** contains the retraining part of the model.
    - The data sent to retrain_model is used to retrain the model.
    - The learning rate of the optimizer (Adam) is set to a low static value of 0.001, so the model learns new patterns while avoiding catastrophic forgetting of old ones.
    - The current model is saved as previous_lodbrok.h5 in ../static/weights/ for future reference, in case we want to roll back.
    - It then calls the train_model function, which continues training the model on the new data. The batch size is set to 1 with 2 epochs to keep the learning balanced.
    - After training, the new model is saved as current_lodbrok.h5, overwriting the old model, so it can keep classifying new data over time.
    - The original LodBrok weights are saved in original_lodbrok.h5, so we can go back and trace the progress made so far.
    - **Benefits of using this method:**
        - No extra database is needed to store the new data sent by the SpamNinja, or to maintain the old data the model was trained on.
        - There is no fixed size for the batch of data sent to the model; it can be any number of accounts.
        - The model's old learnings won't be forgotten quickly, thanks to the **slow static learning rate** and the structure of the data staying the same.
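The weight-file rotation described above can be sketched like this; only the three file names come from the docs, while the file handling details are assumptions:

```python
import shutil
import tempfile
from pathlib import Path

def rotate_weights(weights_dir):
    """Before retraining, back up current_lodbrok.h5 as previous_lodbrok.h5;
    original_lodbrok.h5 is never touched. retrain_model would then overwrite
    current_lodbrok.h5 with the newly trained weights."""
    weights_dir = Path(weights_dir)
    current = weights_dir / "current_lodbrok.h5"
    previous = weights_dir / "previous_lodbrok.h5"
    shutil.copy(current, previous)  # keep a rollback copy
    # ... retrain_model(...) runs here and saves over current_lodbrok.h5 ...
    return previous

# Demo in a temporary directory with dummy weight files.
tmp = Path(tempfile.mkdtemp())
(tmp / "current_lodbrok.h5").write_bytes(b"current-weights")
(tmp / "original_lodbrok.h5").write_bytes(b"original-weights")
backup = rotate_weights(tmp)
```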

spambrainz/app/classify.py

Lines changed: 3 additions & 1 deletion
```diff
@@ -47,7 +47,9 @@ def classify_process():
 
     # defining the structure
     queue = np.array([queue])
-
+
+    # only data from index 1 is considered while predicting, thus
+    # not taking the spam value into consideration
     predict_data = {
         "main_input": np.array(queue[:,1:10]),
         "email_input": np.array(queue[:,10]),
```
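The `queue[:,1:10]` slice in this hunk drops column 0 (the spam label) and keeps columns 1 through 9. In plain Python, with a hypothetical row, the equivalent is:

```python
# Plain-Python equivalent of the NumPy slice queue[:, 1:10]: keep columns
# 1..9 of every row, dropping column 0 (the spam label). The row values
# here are made up for illustration.
queue = [
    [1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99],
]
main_input = [row[1:10] for row in queue]
# main_input == [[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]]
```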

spambrainz/app/preprocessing.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -84,7 +84,7 @@ def preprocess_editor(editor, spam=None):
         bio = np.zeros(512)
 
     data = np.array([
-        spam, # spam classification
+        spam, # spam classification (used only during training the model)
         editor["area"] is not None, # Area Set
         editor["gender"] is not None, # Gender
         editor["birth_date"] is not None, # Birth date set
```

spambrainz/app/routes.py

Lines changed: 3 additions & 3 deletions
```diff
@@ -42,10 +42,10 @@ def predict():
 
     # the classification done is retrived form redis
     output = db.get(editor_id)
-
+    output = json.loads(output)
+    output["id"] = editor_id
     if output is not None:
-
-        data["predictions"] = json.loads(output)
+        data["predictions"] = output
         db.delete(editor_id)
         data["success"] = True
```

spambrainz/simple_request.py

Lines changed: 0 additions & 43 deletions
This file was deleted.
Binary files (images): 29.2 KB, 133 KB, −421 KB — not shown.
