llmr/src/llmr/pipeline/__init__.py
from llmr.pipeline.action_dispatcher import ActionDispatcher, ActionHandler, WorldContext
from llmr.pipeline.entity_grounder import EntityGrounder, GroundingResult, ground_entity

__all__ = [
def execute(self, schema: Any) -> Any:
    """Ground entities, build partial designator, resolve, return action."""

def _ground(self, description: EntityDescriptionSchema) -> GroundingResult:
@dataclass
class PickUpActionHandler(ActionHandler):

Why do you have to make a handler for every action type? That adds a lot of code to be maintained. Why not have a generic pipeline from LLM to Action?
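A sketch of what the suggested generic pipeline could look like, assuming a registry that maps the action type name produced by the LLM to a single builder function instead of one handler class per action. All names here (`register_action`, `build_pick_up`) are hypothetical, not part of the PR:

```python
# Hypothetical registry-based dispatch: one decorator registers a builder
# per action type, so no new handler class is needed for new actions.
from typing import Any, Callable, Dict

ACTION_BUILDERS: Dict[str, Callable[..., Any]] = {}

def register_action(name: str):
    """Decorator that registers a builder under an action type name."""
    def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
        ACTION_BUILDERS[name] = fn
        return fn
    return decorator

@register_action("PickUpAction")
def build_pick_up(**params: Any) -> Any:
    # Stand-in for building a real PyCRAM action designator.
    return ("PickUpAction", params)

def dispatch(action_type: str, **params: Any) -> Any:
    try:
        builder = ACTION_BUILDERS[action_type]
    except KeyError:
        raise ValueError(f"No builder registered for {action_type!r}")
    return builder(**params)

print(dispatch("PickUpAction", obj="milk", arm="left"))
```

With this shape, adding an action means registering one function; the dispatch path itself never changes.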
@dataclass
class PreconditionProvider(ABC):

These could be EQL-based, but that's also related to Jonas' PR; maybe have a look at it.
@dataclass
class ExecutionState:

Do we have anything that preserves execution history in PyCRAM? Probably. World states are relevant here as well, so this class may be unneeded or a duplicate.

    _graph_cache: Dict[Tuple[int, Type[BaseModel]], Any] = {}
def _build_resolver_graph(

What is a resolver graph? A resolver for what, and why a graph?

# ── Private intermediate schema ────────────────────────────────────────────────
class _SlotFillerOutput(BaseModel):

What do you mean by a slot?

)
def _to_typed_schema(raw: _SlotFillerOutput) -> ActionSlotSchema:

This is not open-closed: to add a new action for the LLM to handle, one must modify this function.

raw_dict: dict = final_state["slot_schema"]
action_type = raw_dict.get("action_type")
if action_type == "PickUpAction":

Same issue: not open-closed.
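One open-closed-friendly alternative, assuming the schema classes register themselves in a lookup table keyed by their own name, so the conversion function never grows an if/elif branch. The dataclasses below are minimal stand-ins for the real Pydantic models:

```python
# Hypothetical sketch: look up the schema class from the LLM's
# "action_type" string instead of branching on it explicitly.
from dataclasses import dataclass
from typing import Any, Dict, Type

SCHEMA_BY_ACTION: Dict[str, Type[Any]] = {}

def action_schema(cls: Type[Any]) -> Type[Any]:
    """Register a schema class under its own class name."""
    SCHEMA_BY_ACTION[cls.__name__] = cls
    return cls

@action_schema
@dataclass
class PickUpAction:
    object_name: str

@action_schema
@dataclass
class PlaceAction:
    object_name: str
    target: str

def to_typed_schema(raw: Dict[str, Any]) -> Any:
    data = dict(raw)
    cls = SCHEMA_BY_ACTION[data.pop("action_type")]
    return cls(**data)

print(to_typed_schema({"action_type": "PlaceAction",
                       "object_name": "milk", "target": "table"}))
```

A new action then only requires defining and registering its schema class; `to_typed_schema` stays untouched.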
user's instruction, you must decide the remaining discrete parameters needed to
execute a PickUpAction.

## PARAMETERS YOU DECIDE

Hard-coded parameter names and possible values are not maintainable as the code changes. Make these depend on the actual class parameters and enum classes: otherwise, whoever changes the original definitions will have to fix them here as well, whereas deriving them automatically means you don't need to maintain this yourself.
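The suggestion above could be sketched by generating the prompt's parameter list from the real enum definitions at runtime, so the prompt stays in sync when the enums change. `Arms` and `Grasp` below are illustrative stand-ins for the actual PyCRAM enums:

```python
# Hypothetical prompt-fragment generator: enumerate the allowed values
# straight from the enum classes instead of hard-coding them in the prompt.
from enum import Enum

class Arms(Enum):       # stand-in for the real PyCRAM arm enum
    LEFT = "left"
    RIGHT = "right"

class Grasp(Enum):      # stand-in for the real grasp enum
    FRONT = "front"
    TOP = "top"

def describe_parameters(*enums: type) -> str:
    """Render one prompt line per enum, listing its legal values."""
    lines = []
    for e in enums:
        values = ", ".join(m.value for m in e)
        lines.append(f"- {e.__name__}: one of [{values}]")
    return "\n".join(lines)

print(describe_parameters(Arms, Grasp))
```

Renaming or extending an enum member then updates the prompt automatically.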
|
Removed explicit sdt world dependencies so that this package operates independently.
Added a brief README giving an overview of the llmr package functionality.
The pipeline takes a natural language instruction, decomposes it into atomic steps, and builds executable PyCRAM actions directly, covering pick-up and place actions for now.
Each step goes through slot filling (the LLM classifies the action and extracts parameters) → entity grounding (resolving object names to live world objects) → action dispatch, with automatic navigation and posture preconditions prepended before each action. Failures are handled via LLM-driven replanning or a graceful abort.
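The three stages described above can be sketched end to end. Every function body here is a stand-in (the real steps call an LLM and the live PyCRAM world), and all names are hypothetical:

```python
# Minimal end-to-end sketch of slot filling -> entity grounding -> dispatch.
from typing import Any, Dict

def fill_slots(instruction: str) -> Dict[str, Any]:
    # Real version: the LLM classifies the action and extracts parameters.
    return {"action_type": "PickUpAction", "object_name": "milk"}

def ground_entities(slots: Dict[str, Any], world: Dict[str, Any]) -> Dict[str, Any]:
    # Real version: resolve object names against live world objects.
    grounded = dict(slots)
    grounded["object"] = world[slots["object_name"]]
    return grounded

def dispatch(grounded: Dict[str, Any]) -> str:
    # Real version: build and execute the PyCRAM action designator,
    # with navigation/posture preconditions prepended first.
    return f"execute {grounded['action_type']} on {grounded['object']}"

world = {"milk": "<Object milk_1>"}
print(dispatch(ground_entities(fill_slots("pick up the milk"), world)))
```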
Requires an OPENAI_API_KEY. The LLM provider is configurable; Ollama is supported, which allows using open-source LLMs from a server.