"template": "from foundry_sdk import FoundryClient\nimport foundry_sdk\nfrom pprint import pprint\n\nclient = FoundryClient(auth=foundry_sdk.UserTokenAuth(...), hostname=\"example.palantirfoundry.com\")\n\n# LanguageModelApiName\nanthropic_model_model_id = None\n# int | The maximum number of tokens to generate before stopping.\nmax_tokens = None\n# List[AnthropicMessage] | Input messages to the model. This can include a single user-role message or multiple messages with alternating user and assistant roles.\nmessages = [{\"role\": \"user\"}]\n# Optional[PreviewMode] | Enables the use of preview functionality.\npreview = None\n# Optional[List[str]] | Custom text sequences that will cause the model to stop generating.\nstop_sequences = None\n# Optional[List[AnthropicSystemMessage]] | A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. As of now, sending multiple system prompts is not supported.\nsystem = None\n# Optional[float] | Amount of randomness injected into the response. Ranges from 0.0 to 1.0. Note that even with temperature of 0.0, the results will not be fully deterministic. Defaults to 1.0\ntemperature = None\n# Optional[AnthropicThinkingConfig] | Configuration for enabling Claude's extended thinking.\nthinking = None\n# Optional[AnthropicToolChoice] | How the model should use the provided tools.\ntool_choice = None\n# Optional[List[AnthropicTool]] | Definitions of tools that the model may use.\ntools = None\n# Optional[int] | Only sample from the top K options for each subsequent token.\ntop_k = None\n# Optional[float] | Use nucleus sampling. 
You should either alter temperature or top_p, but not both\ntop_p = None\n\n\ntry:\n api_response = client.language_models.AnthropicModel.messages(\n anthropic_model_model_id,\n max_tokens=max_tokens,\n messages=messages,\n preview=preview,\n stop_sequences=stop_sequences,\n system=system,\n temperature=temperature,\n thinking=thinking,\n tool_choice=tool_choice,\n tools=tools,\n top_k=top_k,\n top_p=top_p,\n )\n print(\"The messages response:\\n\")\n pprint(api_response)\nexcept foundry_sdk.PalantirRPCException as e:\n print(\"HTTP error when calling AnthropicModel.messages: %s\\n\" % e)"
1023
+
"template": "from foundry_sdk import FoundryClient\nimport foundry_sdk\nfrom pprint import pprint\n\nclient = FoundryClient(auth=foundry_sdk.UserTokenAuth(...), hostname=\"example.palantirfoundry.com\")\n\n# LanguageModelApiName\nanthropic_model_model_id = None\n# int | The maximum number of tokens to generate before stopping.\nmax_tokens = None\n# List[AnthropicMessage] | Input messages to the model. This can include a single user-role message or multiple messages with alternating user and assistant roles.\nmessages = [{\"role\": \"USER\"}]\n# Optional[PreviewMode] | Enables the use of preview functionality.\npreview = None\n# Optional[List[str]] | Custom text sequences that will cause the model to stop generating.\nstop_sequences = None\n# Optional[List[AnthropicSystemMessage]] | A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. As of now, sending multiple system prompts is not supported.\nsystem = None\n# Optional[float] | Amount of randomness injected into the response. Ranges from 0.0 to 1.0. Note that even with temperature of 0.0, the results will not be fully deterministic. Defaults to 1.0\ntemperature = None\n# Optional[AnthropicThinkingConfig] | Configuration for enabling Claude's extended thinking.\nthinking = None\n# Optional[AnthropicToolChoice] | How the model should use the provided tools.\ntool_choice = None\n# Optional[List[AnthropicTool]] | Definitions of tools that the model may use.\ntools = None\n# Optional[int] | Only sample from the top K options for each subsequent token.\ntop_k = None\n# Optional[float] | Use nucleus sampling. 
You should either alter temperature or top_p, but not both\ntop_p = None\n\n\ntry:\n api_response = client.language_models.AnthropicModel.messages(\n anthropic_model_model_id,\n max_tokens=max_tokens,\n messages=messages,\n preview=preview,\n stop_sequences=stop_sequences,\n system=system,\n temperature=temperature,\n thinking=thinking,\n tool_choice=tool_choice,\n tools=tools,\n top_k=top_k,\n top_p=top_p,\n )\n print(\"The messages response:\\n\")\n pprint(api_response)\nexcept foundry_sdk.PalantirRPCException as e:\n print(\"HTTP error when calling AnthropicModel.messages: %s\\n\" % e)"
"template": "from foundry_sdk import FoundryClient\nimport foundry_sdk\nfrom pprint import pprint\n\nclient = FoundryClient(auth=foundry_sdk.UserTokenAuth(...), hostname=\"example.palantirfoundry.com\")\n\n# str | The SQL query to execute. Queries should conform to the [Spark SQL dialect](https://spark.apache.org/docs/latest/sql-ref.html). This supports SELECT queries only. Refer the following [documentation](https://www.palantir.com/docs/foundry/analytics-connectivity/odbc-jdbc-drivers/#use-sql-to-query-foundry-datasets) on the supported syntax for referencing datasets in SQL queries.\nquery = \"SELECT * FROM `/Path/To/Dataset`\"\n# Optional[List[BranchName]] | The list of branch ids to use as fallbacks if the query fails to execute on the primary branch. If a is not explicitly provided in the SQL query, the resource will be queried on the first fallback branch provided that exists. If no fallback branches are provided the default branch is used. This is `master` for most enrollments.\nfallback_branch_ids = [\"master\"]\n# Optional[PreviewMode] | Enables the use of preview functionality.\npreview = None\n\n\ntry:\n api_response = client.sql_queries.SqlQuery.execute(\n query=query, fallback_branch_ids=fallback_branch_ids, preview=preview\n )\n print(\"The execute response:\\n\")\n pprint(api_response)\nexcept foundry_sdk.PalantirRPCException as e:\n print(\"HTTP error when calling SqlQuery.execute: %s\\n\" % e)"
1478
+
"template": "from foundry_sdk import FoundryClient\nimport foundry_sdk\nfrom pprint import pprint\n\nclient = FoundryClient(auth=foundry_sdk.UserTokenAuth(...), hostname=\"example.palantirfoundry.com\")\n\n# str | The SQL query to execute. Queries should conform to the [Spark SQL dialect](https://spark.apache.org/docs/latest/sql-ref.html). This supports SELECT queries only. Datasets can be referenced in SQL queries by path or by RID. See the [documentation](https://www.palantir.com/docs/foundry/analytics-connectivity/odbc-jdbc-drivers/#use-sql-to-query-foundry-datasets) for more details.\nquery = \"SELECT * FROM `/Path/To/Dataset`\"\n# Optional[List[BranchName]] | The list of branch ids to use as fallbacks if the query fails to execute on the primary branch. If a is not explicitly provided in the SQL query, the resource will be queried on the first fallback branch provided that exists. If no fallback branches are provided the default branch is used. This is `master` for most enrollments.\nfallback_branch_ids = [\"master\"]\n# Optional[PreviewMode] | Enables the use of preview functionality.\npreview = None\n\n\ntry:\n api_response = client.sql_queries.SqlQuery.execute(\n query=query, fallback_branch_ids=fallback_branch_ids, preview=preview\n )\n print(\"The execute response:\\n\")\n pprint(api_response)\nexcept foundry_sdk.PalantirRPCException as e:\n print(\"HTTP error when calling SqlQuery.execute: %s\\n\" % e)"
Line 10 — the `name` field of a schema column is now optional, matching its own description:

-**name** | FieldName | Yes | The name of a column. May be absent in nested schema objects. |
+**name** | Optional[FieldName] | No | The name of a column. May be absent in nested schema objects. |

The rows that follow are unchanged:

**nullable** | bool | Yes | Indicates whether values of this field may be null. |
**user_defined_type_class** | Optional[str] | No | Canonical classname of the user-defined type for this field. This should be a subclass of Spark's `UserDefinedType`. |
**custom_metadata** | Optional[CustomMetadata] | No | User-supplied custom metadata about the column, such as Foundry web archetypes, descriptions, etc. |
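Since `name` is now `Optional`, a nested schema object (for example, the element type inside an array column) can legally omit it. Here is a sketch assuming fields are plain dicts keyed by the property names in the table above; the overall field shape is an assumption, not the SDK's exact model class.

```python
# Hypothetical field dicts keyed by the property names in the table above.
top_level_field = {
    "name": "tags",  # named top-level column
    "nullable": True,
    "custom_metadata": {"description": "user-supplied tags"},
}

# Nested schema objects may omit "name" entirely now that it is Optional.
nested_element_field = {
    "nullable": False,
}
```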
[[Back to Model list]](../../../../README.md#models-v2-link)[[Back to API list]](../../../../README.md#apis-v2-link)[[Back to README]](../../../../README.md)
docs/v2/LanguageModels/AnthropicModel.md (1 addition, 1 deletion)
@@ -41,7 +41,7 @@ anthropic_model_model_id = None
 # int | The maximum number of tokens to generate before stopping.
 max_tokens = None
 # List[AnthropicMessage] | Input messages to the model. This can include a single user-role message or multiple messages with alternating user and assistant roles.
-messages = [{"role": "user"}]
+messages = [{"role": "USER"}]
 # Optional[PreviewMode] | Enables the use of preview functionality.
 preview = None
 # Optional[List[str]] | Custom text sequences that will cause the model to stop generating.
docs/v2/LanguageModels/models/AnthropicMediaType.md (4 additions, 4 deletions)
@@ -4,10 +4,10 @@ AnthropicMediaType
 |**Value**|
 | --------- |
-|`"image_jpeg"`|
-|`"image_png"`|
-|`"image_gif"`|
-|`"image_webp"`|
+|`"IMAGE_JPEG"`|
+|`"IMAGE_PNG"`|
+|`"IMAGE_GIF"`|
+|`"IMAGE_WEBP"`|
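The enum values are now uppercase. A small sketch mapping file extensions onto the four values shown above; the mapping and helper are illustrative conveniences, not part of the SDK.

```python
# Sketch: pick the new uppercase AnthropicMediaType value for a file
# extension. The four values mirror the enum table above.
MEDIA_TYPES = {
    "jpg": "IMAGE_JPEG",
    "jpeg": "IMAGE_JPEG",
    "png": "IMAGE_PNG",
    "gif": "IMAGE_GIF",
    "webp": "IMAGE_WEBP",
}

def media_type_for(filename: str) -> str:
    """Return the media-type enum value for a filename's extension."""
    ext = filename.rsplit(".", 1)[-1].lower()
    return MEDIA_TYPES[ext]
```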
[[Back to Model list]](../../../../README.md#models-v2-link)[[Back to API list]](../../../../README.md#apis-v2-link)[[Back to README]](../../../../README.md)