Who we are
Lightricks is an AI-first company with a mission to bridge the gap between imagination and creation. At our core is LTX-2, an open-source generative video model built to deliver expressive, high-fidelity video at unmatched speed. It powers both our own products and a growing ecosystem of partners through API integration.
Our flagship products include Facetune and LTX Studio, with hundreds of millions of users worldwide. We combine deep research, user-first design, and end-to-end execution to bring the future of expression to all.
What you will be doing
As an ML Software Engineer with a focus on low-level and CUDA-based optimizations, you will play a key role in shaping the design, performance, and scalability of Lightricks’ machine learning inference systems. You’ll work on deeply technical challenges at the intersection of GPU acceleration, systems architecture, and ML deployment.
Your expertise in CUDA, C/C++, and performance tuning will be crucial in enhancing runtime efficiency across heterogeneous computing environments. You’ll collaborate with designers, researchers, and backend engineers to build production-grade ML pipelines that are optimized for latency, throughput, and memory use, contributing directly to the infrastructure powering Lightricks' next-generation AI products.
This role is ideal for an engineer with strong systems-level thinking, deep familiarity with GPU internals, and a passion for pushing the boundaries of performance and efficiency in machine learning infrastructure.
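To give a concrete flavor of this kind of low-level work, here is a minimal, purely illustrative CUDA sketch: a bandwidth-bound elementwise pass that uses vectorized float4 loads and stores so each thread moves 128 bits per memory transaction. The kernel name, sizes, and constants are invented for this example and are not taken from our codebase.

```cuda
// Purely illustrative: a bandwidth-bound elementwise kernel using vectorized
// float4 loads/stores so each thread issues fewer, wider memory transactions.
// All names and sizes here are hypothetical.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale_bias_f4(const float4* __restrict__ in,
                              float4* __restrict__ out,
                              float scale, float bias, int n4) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4) {
        float4 v = in[i];            // one coalesced 128-bit load
        v.x = v.x * scale + bias;
        v.y = v.y * scale + bias;
        v.z = v.z * scale + bias;
        v.w = v.w * scale + bias;
        out[i] = v;                  // one coalesced 128-bit store
    }
}

int main() {
    const int n  = 1 << 24;          // element count (multiple of 4)
    const int n4 = n / 4;            // number of float4 elements
    float4 *d_in, *d_out;
    cudaMalloc(&d_in,  n4 * sizeof(float4));
    cudaMalloc(&d_out, n4 * sizeof(float4));
    cudaMemset(d_in, 0, n4 * sizeof(float4));   // input contents irrelevant here

    const int block = 256;
    const int grid  = (n4 + block - 1) / block;
    scale_bias_f4<<<grid, block>>>(d_in, d_out, 2.0f, 1.0f, n4);
    cudaDeviceSynchronize();
    printf("launch status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```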
Responsibilities
- Design and implement highly optimized GPU-accelerated ML inference systems using CUDA and low-level parallelism techniques
- Optimize memory, compute, and data flow to meet real-time or high-throughput constraints
- Improve the performance, reliability, and observability of our inference backend across diverse compute targets (CPU/GPU)
- Collaborate with cross-functional teams (including researchers, developers, and designers) to deliver efficient and scalable inference solutions
- Contribute to ComfyUI and internal infrastructure to improve usability and performance of model execution flows
- Investigate performance bottlenecks at all levels of the stack, from Python down to kernel-level execution (a minimal timing sketch follows this list)
- Navigate and enhance a large, complex, production-grade codebase
- Drive innovation in low-level system design to support future ML workloads
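As a small illustration of the kernel-level investigation mentioned in the list above, the sketch below times a placeholder kernel with CUDA events, the kind of quick first-pass measurement taken before reaching for a profiler such as Nsight. The kernel and problem size are hypothetical.

```cuda
// Purely illustrative: measuring a kernel's GPU time with CUDA events.
// The kernel and sizes are placeholders, not production code.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void placeholder_kernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 0.5f + 1.0f;
}

int main() {
    const int n = 1 << 22;
    float* d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    placeholder_kernel<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);             // wait until the kernel has finished

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop); // elapsed GPU time in milliseconds
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_data);
    return 0;
}
```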
Your Skills and Experience
- 5+ years of experience in high-performance software engineering
- Advanced proficiency in CUDA, C/C++, and Python, especially in production environments
- Deep understanding of GPU architecture, memory hierarchies, and optimization techniques
- Proven track record of optimizing compute-intensive systems
- Strong system architecture fundamentals, especially around performance, concurrency, and parallelism
- Ability to independently lead deep technical investigations and deliver clean, maintainable solutions
- Collaborative and team-oriented mindset, with experience working across functional teams
Preferred Requirements
- Experience with low-level profiling and debugging tools (e.g., Nsight, perf, gdb, VTune)
- Familiarity with machine learning frameworks (e.g., PyTorch, TensorRT, ONNX Runtime)
- Contributions to performance-critical open-source or ML infrastructure projects
- Experience with cloud infrastructure and GPU scheduling at scale
Why Join Us
We’re here to push the boundaries of what’s possible with AI and video - not for the buzz, but for the craft, the challenge, and the chance to make something genuinely new.
We believe in an environment where people are encouraged to think, create, and explore. Real impact happens when people are empowered to experiment, evolve, and elevate together. At Lightricks, every breakthrough starts with great people and a collaborative mindset. If you're looking for a place that combines deep tech, creative energy, and zero buzzword culture, you might be in the right place.
We’ve got you covered:
- We run daily door-to-door shuttles and offer Car-to-go subscriptions for several locations in central Israel, plus free parking and train-station pickups.
- We’re proud to have two chef-led restaurants on site, run by the legendary Machneyuda Group (yes, that Machneyuda!), plus a bakery nestled in the heart of our office, filled daily with the scent of fresh pastries.
- We empower employees to grow and succeed with cutting-edge tools and learning opportunities: workshops, platform access and training, subscriptions, and clear guidelines for responsible AI use.