Make sure you have set your OPENAI_API_KEY environment variable to your OpenAI API key.

To use paper-qa, you need to have a list of paths (valid extensions include: .pdf, .txt, .jpg, .pptx, .docx, .csv, .epub, .md, .mp4, .mp3) and a list of citations (strings) that correspond to the paths. You can then use the `Docs` class to add the documents and then query them.
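
As a rough sketch, indexing documents looks something like the following. The paths and citation strings below are placeholders, and the exact `Docs.add` signature may vary between paper-qa versions, so check the version you have installed:

```python
from paperqa import Docs

# Placeholder documents and citation strings -- substitute your own
my_docs = ["manufacturing_review.pdf", "bispecific_trials.pdf"]
my_citations = ["Author A et al., Journal of Examples, 2022", "Author B et al., Example Reviews, 2023"]

docs = Docs()
for path, citation in zip(my_docs, my_citations):
    docs.add(path, citation)
```
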
*This uses a lot of tokens!! About 10-30k tokens per answer + embedding cost (negligible unless many documents used). That is up to $0.50 per answer with current GPT-3 pricing. Use wisely.*

```python
answer = docs.query("What manufacturing challenges are unique to bispecific antibodies?")
print(answer.formatted_answer)
```
The answer object has the following attributes: `formatted_answer`, `answer` (the answer alone), `question`, `context` (the summaries of the passages found for the answer), and `references` (the docs from which the passages came).
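
For example, with the `answer` object from the query above:

```python
print(answer.question)    # the question that was asked
print(answer.context)     # summaries of the passages used to build the answer
print(answer.references)  # the documents those passages came from
```
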
## Adjusting number of sources

You can adjust the number of sources/passages to reduce token usage or add more context. For example:

```python
docs.query("What manufacturing challenges are unique to bispecific antibodies?", k=1, max_sources=3)
```
## FAQ
### How is this different from gpt-index?
gpt-index does generate answers, but in a somewhat opinionated way. It doesn't have a great way to track where text comes from, and it's not easy to force it to pull from multiple documents. I don't know which way is better, but for writing scholarly text I found it works better to pull from multiple relevant documents and then generate an answer. I would like to open a PR to add this to gpt-index, but it looks pretty involved right now.

I use some of my own code to pull papers from Google Scholar. This code is not included because it may enable people to violate Google's terms of service and publishers' terms of service.
### Can I save/load?
The `Docs` class can be pickled and unpickled. This is useful if you want to save the embeddings of the documents and then load them later.
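
For example, a minimal sketch with the standard `pickle` module (the file name is arbitrary):

```python
import pickle

# Save the Docs object, including the embeddings it has computed
with open("my_docs.pkl", "wb") as f:
    pickle.dump(docs, f)

# Later: load it back and query without re-adding the documents
with open("my_docs.pkl", "rb") as f:
    docs = pickle.load(f)

answer = docs.query("What manufacturing challenges are unique to bispecific antibodies?")
```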