Hi Miguel!
I’ve been looking into cog-rust, and I really like the core idea of using a lightweight and fast runtime for ML inference that’s suitable for real production use, not just experiments.
I’m currently building an AI assistant that provides daily personalized recommendations (places, events, activities). Over time, low latency, stable inference, and the ability to move some logic into dedicated ML modules are becoming important for us.
Do you think the ideas or approaches behind cog-rust could be applied or adapted to a product like this? I’d be very interested in your perspective specifically in the context of real-world AI services rather than demos.
Thanks!