The future of AI is inference
With the rise of agentic workflows and reasoning models, enterprises now need 100x more compute and 10x more throughput to run state-of-the-art AI models. Building robust, scalable inference systems has become a top priority—but it's also a major bottleneck, requiring deep expertise in low-level systems, snapshotters, Kubernetes, and more.
Tensorfuse removes this complexity by helping teams run serverless GPUs in their own AWS account. Just bring:
Your code and environment as a Dockerfile
Your AWS account with GPU capacity

We handle the rest: deploying, managing, and autoscaling your GPU containers on production-grade infrastructure. Teams use Tensorfuse for:
Developer Experience: 10x better than current solutions, helping you move fast and save months of engineering time.
Seamless Autoscaling: Spin up 100s of GPUs in real time with startup times as low as 30 seconds (vs. 8–10 mins with custom setups).
Security: All workloads run inside your own AWS VPC—your data never leaves your account.
Production Features: Native CI/CD integration, custom domain support, volume mounting, and more, out of the box.

We're building the runtime layer for AI-native companies. Join us.
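The "bring your code and env as a Dockerfile" step above can be sketched as a minimal GPU inference image. This is an illustrative example only; the base image, inference server (vLLM), model, and port are assumptions, not Tensorfuse requirements:

```dockerfile
# Illustrative sketch: a container that serves an LLM with vLLM's
# OpenAI-compatible API server. Base image, model, and port are assumptions.
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04

# Install Python and the inference server
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install vllm

# Port the inference server listens on
EXPOSE 8000

# Serve the model; replace with your own model and flags
CMD ["python3", "-m", "vllm.entrypoints.openai.api_server", \
     "--model", "mistralai/Mistral-7B-Instruct-v0.2", "--port", "8000"]
```

With an image like this plus GPU capacity in your AWS account, the platform handles deployment and autoscaling.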
Building scalable infra for AI inference.
Architecting and implementing backend services in Go and Python
Extending and optimizing our Kubernetes operators for AI workloads
Optimizing container startup times with custom snapshotters written in Go
Ensuring high reliability of our infrastructure on AWS
Ensuring infrastructure-level security of AI workloads
Designing custom load-balancing algorithms for LLMs, video models, and bio models in production
2-3 years of professional experience with Go and/or Kubernetes
Familiarity with IaC tools like AWS CloudFormation
You are methodical in debugging complex systems, willing to dive deep into the code.
Philosophy: You will be working on a hard technical problem in an emerging market, which requires committing to it for at least two years. You will be an ideal fit if you are willing to take that risk.
Experience with GPU workloads and ML inference (vLLM, SGLang, etc.) is a plus.
Experience working at a startup (Seed to Series C stage).

This is an in-person role at our office in Bangalore. We're an early-stage company, which means the role requires working hard and moving quickly. Please only apply if that excites you.
Full-time
$102K–$118K
Bengaluru, India