Replies: 1 comment
I've been working a lot on expression structure for composing pipelines. Something like this:

```lisp
(let* ((ctx (context))
       (frame (frame ctx))
       (mesh-buffer (buffer 'dynamic-mesh))
       (indirect-buffer (buffer 'draw-commands))
       (generate-geo
        (pipeline
         :stages (:compute "shaders/generate")
         :writes mesh-buffer
         :writes indirect-buffer))
       (render-geo
        (pipeline
         :stages (:vertex "shaders/vertex"
                  :fragment "shaders/fragment")
         :input mesh-buffer
         :draw-indirect indirect-buffer)))
  ;; compute geometry
  (bind-pipeline generate-geo)
  (dispatch 128 1 1)
  ;; render geometry
  (bind-pipeline render-geo)
  (draw-indirect indirect-buffer))
```
Working Backwards
The sum is easier than the parts here. Until we assemble a full pipeline, almost nothing is known. We don't know what buffers might be bound to a push constant or control structure for a particular invocation of a pipeline. When making a pipeline in isolation, type checking may require having concrete input handles.
Specs
Partly for lifecycle, partly to enable the runtime dynamics and to let proc macros work well, it seems very likely that we'll have specs. Because specs use names and ids, not hard resources with vk handles, I think they will make type checking easier and macros easier to write. The lifecycle makes the use of specs a necessity, so it's not awkward at all. Finally, runtime mutation again demands that we can identify shared resources that serve similar purposes and crossfade from one render style to another. That decision almost assuredly gets made at runtime. Specs make it natural.
Building Specs by Hand
Until the big bad macros are ready, I'm going to have to build up smaller things and begin making their data more and more implicit wherever it is redundant. This is the way. This means declaring stages with structs for push constants and handles for buffers. Once composed into pipelines, these can be checked. Once composition of pipelines is done by hand, it can be encoded into shorthand via macro.
Staying Positive
Remember kids, ergonomics first and typed contracts second. The goal at this point is to get from A to Z by skipping some letters, as long as the programs run, and then begin backbuilding guarantees and compile-time checks, enabling more complex Zs. Without any Zs, we have no idea where the pain is and will likely overoptimize.
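To make "building specs by hand" concrete, here is a minimal Rust sketch. The names (`BufferId`, `StageSpec`, `PipelineSpec`) and the deliberately naive chaining check are hypothetical, not taken from the actual codebase:

```rust
// Hypothetical spec types: names and ids only, no live vk handles.
#[derive(Debug, Clone, PartialEq)]
pub struct BufferId(pub String);

#[derive(Debug, Clone)]
pub struct StageSpec {
    pub shader: String,              // path, e.g. "shaders/generate"
    pub push_constants: Vec<String>, // field names of the push-constant struct
}

#[derive(Debug, Clone)]
pub struct PipelineSpec {
    pub name: String,
    pub stages: Vec<StageSpec>,
    pub reads: Vec<BufferId>,
    pub writes: Vec<BufferId>,
}

impl PipelineSpec {
    /// Naive composition check: everything `next` reads must be
    /// something this pipeline writes. Real checks would also compare
    /// formats, sizes, and stage interfaces.
    pub fn feeds(&self, next: &PipelineSpec) -> bool {
        next.reads.iter().all(|r| self.writes.contains(r))
    }
}
```

Because specs hold only names and ids, checks like `feeds` run without any device resources in hand, which is exactly what makes the hand-built phase tolerable.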
So, you have some pipelines in a command buffer being dispatched in a loop. Great. Now we want to begin using some of the intermediate outputs with draw commands from a different visual. Oh god.
Making money comes from making hard things less hard. Nobody ever gets paid to make easy things even easier. - Anonymous High Templar
Inside Command Buffers
Meaningful transitions (changes from one kind of visual to another) will occur inside the command buffer, because inter-command-buffer sync is too coarse to be useful. Commands from different visuals' pipelines can only be interleaved when the points of interleaving have names and their inputs and outputs are labelled, so that we can splice pipelines into the command buffer and insert any missing barriers.
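As a sketch of the splice-then-insert-barriers idea, assuming a toy `Cmd` enum rather than real Vulkan commands: because each command labels what it reads and writes, a single pass can find every write-then-read hazard in an interleaved stream and fence it.

```rust
use std::collections::HashSet;

// Toy command representation: each dispatch names what it reads/writes.
#[derive(Debug, Clone, PartialEq)]
pub enum Cmd {
    Dispatch { pipeline: String, reads: Vec<String>, writes: Vec<String> },
    Barrier { resource: String },
}

/// Walk an interleaved command stream and insert any missing barriers:
/// a resource written earlier needs a barrier before its next read.
pub fn insert_missing_barriers(cmds: &[Cmd]) -> Vec<Cmd> {
    let mut out = Vec::new();
    let mut dirty: HashSet<String> = HashSet::new(); // written, not yet fenced
    for cmd in cmds {
        if let Cmd::Dispatch { reads, writes, .. } = cmd {
            for r in reads {
                if dirty.remove(r) {
                    out.push(Cmd::Barrier { resource: r.clone() });
                }
            }
            out.push(cmd.clone());
            for w in writes {
                dirty.insert(w.clone());
            }
        } else {
            out.push(cmd.clone());
        }
    }
    out
}
```

The point is not the data structure; it's that labelled inputs and outputs make barrier insertion a mechanical rewrite instead of something each visual has to coordinate by hand.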
Compatibility of Composition
The versatility of components such as pipelines has limitations. For example, if inputs cannot be combined into one buffer, during a transition we will just dispatch common pipelines twice with two sets of input. If some input doesn't have a normal map, we can't magically (well, machine learning can magically pretend we have one) send that input through a pipeline that requires a normal map.
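The "dispatch twice with two sets of input" fallback is about as simple as it sounds; a toy sketch with hypothetical names:

```rust
// When two visuals' inputs can't share one buffer, record one dispatch
// of the common pipeline per input set instead of merging buffers.
#[derive(Debug, Clone, PartialEq)]
pub struct Dispatch {
    pub pipeline: String,
    pub input: String,
}

pub fn dispatch_per_input(pipeline: &str, inputs: &[&str]) -> Vec<Dispatch> {
    inputs
        .iter()
        .map(|input| Dispatch {
            pipeline: pipeline.to_string(),
            input: input.to_string(),
        })
        .collect()
}
```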
Post-processing is an example of inherently reducing dataflow. Whatever happened upstream, the same downstream procedure "just works." There are numerous ways such composition can go wrong, but we are only going to concern ourselves with enabling some things to go right. Examples of easy hard things that can be made possible:
Automatic Interpolation
A UFO becomes a snowflake. Neither visual "knows" much about the other, but by naming and categorizing things that will kind of work, we can pull things off well enough to let a little machine learning handle the intermediates. An example would be to continue animating both the UFO and the snowflake according to each visual's independent procedures while machine learning interpolates the UFO onto the snowflake while both are in motion. Sounds hard. We will make it less hard.
Modulation Arguments
If we can just map a transition from 0.0 to 1.0, any pipeline that understands this modulation can "just work." That is really quite simple and a very good, not bad at all way to do anything where a 0.0 to 1.0 change makes sense.
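A minimal Rust sketch of that contract, with a hypothetical `Modulated` trait standing in for whatever interface pipelines actually expose: every participating pipeline accepts the same scalar, and the driver doesn't care what each one does with it.

```rust
// Any pipeline that understands a single `transition` scalar in [0.0, 1.0]
// can participate in a crossfade without knowing what it fades to or from.
pub trait Modulated {
    fn set_transition(&mut self, t: f32);
}

pub struct FadePipeline {
    pub transition: f32,
}

impl Modulated for FadePipeline {
    fn set_transition(&mut self, t: f32) {
        // Clamp so callers can feed a raw clock value.
        self.transition = t.clamp(0.0, 1.0);
    }
}

/// Drive every modulated pipeline with the same clock-derived value.
pub fn modulate_all(pipes: &mut [&mut dyn Modulated], t: f32) {
    for p in pipes.iter_mut() {
        p.set_transition(t);
    }
}
```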
Named & Categorized Command Ordering
Ordering and input dependencies do go together, but knowing dependencies does not give us a semantically unique idea of where we can graft. The same dependencies can be read and written at multiple times, so we have to pick which point to read and which point to write. Depending on the ordering we choose, a read at one point is not the same input as a read at another, and the output we write is not the same output.
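One way to make graft points named rather than inferred, again with hypothetical types: each position in the command order carries a phase name, and splicing targets the name, not a dependency edge.

```rust
// Give each position in the command order a name, so a graft point is
// chosen semantically ("after this phase") rather than by dependency alone.
#[derive(Debug, Clone, PartialEq)]
pub struct NamedCmd {
    pub phase: String, // e.g. "generate", "post"
    pub cmd: String,
}

/// Splice new commands immediately after the last command of the named
/// phase. Returns the insertion index, or None if the phase is absent,
/// since a silent no-op graft would hide bugs.
pub fn graft_after(
    buffer: &mut Vec<NamedCmd>,
    phase: &str,
    new: Vec<NamedCmd>,
) -> Option<usize> {
    let idx = buffer.iter().rposition(|c| c.phase == phase)?;
    let at = idx + 1;
    for (i, c) in new.into_iter().enumerate() {
        buffer.insert(at + i, c);
    }
    Some(at)
}
```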
Automating Interpretation of Names
Commands from multiple visuals will make it into one buffer. How they are expressed in that buffer will first be modulated over time. Finally, the old commands will be unwoven from the control logic that enabled the initial weaving. Either by dyn interfaces or runtime switching, which amount to the same thing, we will zip together commands from two (or more) different visuals.
The likely trick for writing such code is the barbaric and shameful `callPkgs`-style abomination from the Nix language. It makes sense here because pipelines absolutely know... well, a lot about their arguments. When we are dealing with named and categorized arguments that enable us to stitch together two command buffers, one command buffer is just an extension of that!
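Roughly what that wiring step could look like in Rust, in the spirit of Nix's `callPackage`: a pipeline declares the argument names it wants, and an environment of named resources supplies them by name. The `call_with` helper and the `u64` stand-in for a vk handle are illustrative only.

```rust
use std::collections::HashMap;

/// Resolve a pipeline's wanted argument names against an environment of
/// named resources. `u64` stands in for a real vk handle here.
pub fn call_with<'a>(
    env: &'a HashMap<String, u64>,
    wanted: &[&str],
) -> Result<Vec<&'a u64>, String> {
    wanted
        .iter()
        .map(|name| {
            env.get(*name)
                .ok_or_else(|| format!("missing argument: {name}"))
        })
        .collect()
}
```

The payoff is the same as in Nix: because binding is by name, two visuals' commands can draw arguments from one shared environment, and a merged command buffer is just a bigger environment.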
Conclusion
We're going to need semantically meaningful information to stitch together compatible pipelines and inputs within the same command buffer. This means named execution orders and named inputs drawn from a common set of meaningful names. The names may carry type information to rule out completely invalid combinations, but within types, names also enable automation of modulation. The key to achieving this is to begin using names to order commands within command buffers even when we just have a single visual. We want to use machine learning to modulate single visuals, and that architecture will go in the direction of modulating combinations of many visuals, both during transitions and during more persistent compositions.