Martian, Joint Interview with Co-CEOs Etan Ginsberg and Shriyash Upadhyay

Martian Co-CEOs Etan Ginsberg and Shriyash Upadhyay

Martian, a San Francisco, CA-based company providing an AI-based orchestration-layer solution, has just raised $9M in funding. In conjunction with the funding, Co-CEOs Etan Ginsberg and Shriyash Upadhyay answered our questions about the company, the solution they offer, the funding, and future plans.

FinSMEs: Hi Yash and Etan, can you tell us a bit more about yourselves? What’s your background?

We were previously AI researchers at the University of Pennsylvania, where we met in 2020. Before that, we founded and exited NLP companies in the 2010s.

FinSMEs: Let’s speak about Martian. What is the market problem you want to solve? What is the real opportunity?

Today, we’re solving the problem of finding the best models for your application. 

It’s becoming easier and easier to create language models – the cost of compute is going down, algorithms are becoming more efficient, and more open source tools are available to create these models. As a result, more companies and developers are creating custom models trained on custom data. As these models have different costs and capabilities, you can get better performance by using multiple models, but it’s difficult to test them all and to find the right ones to use. We take care of that for developers.
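As a rough illustration of the routing idea described above (this is a sketch of the general concept, not Martian's actual system; all model names, prices, and quality scores here are invented), a simple cost-aware router can pick the cheapest model expected to clear a quality bar for a given task:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float     # hypothetical pricing
    quality: dict                 # task -> estimated score in [0, 1]

# Invented model catalog, for illustration only.
MODELS = [
    Model("small-model", 0.002, {"summarize": 0.85, "code": 0.60}),
    Model("large-model", 0.060, {"summarize": 0.95, "code": 0.92}),
]

def route(task: str, min_quality: float = 0.8) -> Model:
    """Pick the cheapest model expected to meet the quality bar."""
    candidates = [m for m in MODELS if m.quality.get(task, 0.0) >= min_quality]
    if not candidates:
        # Fall back to the highest-quality model if none clears the bar.
        return max(MODELS, key=lambda m: m.quality.get(task, 0.0))
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("summarize").name)  # the cheap model suffices for this task
print(route("code").name)       # this task needs the stronger model
```

The cost savings the interview mentions come from exactly this kind of decision: sending requests that a cheaper model handles well to that model, and reserving expensive models for the tasks that need them.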

This is part of our broader mission to understand AI.

Each time we improve our fundamental understanding of models, it results in a paradigm shift for AI. Fine-tuning was the paradigm driven by understanding outputs. Prompting is the paradigm driven by understanding inputs. That single difference in our understanding of models is much of what differentiates traditional ML (“let’s train a regressor”) and modern generative AI (“let’s prompt a baby AGI”).

Our goal is to consistently deliver such breakthroughs until AI is fully understood and we have a theory of intelligence as robust as our theories of logic or calculus.

In the words of Sir Francis Bacon, “Knowledge is power”. Accordingly, the best way to be sure that we understand AI is to release powerful tools. In our opinion, a model router is a tool of that kind. We’re excited to build it, grow it, and put it in people’s hands.

FinSMEs: What are the features differentiating the product from competitors?

We invented model routing, and we’re the first to market. That means the alternatives for most companies are sticking with a single model or trying to build a routing system in house. Routing can provide some really awesome results – some of our customers have seen a ~12x decrease in cost, and we’re able to outperform GPT-4 on OpenAI’s own evals (openai/evals on GitHub). So companies that use only a single model are missing a competitive advantage.

As for others who might try to build a router? It’s easy to make a router, but it’s hard to make a good router. How many routing systems are going to be able to outperform GPT-4? Our technology – which lets us gain a more fundamental understanding of how models operate – gives us the ability to route in a way that is more sophisticated and effective.

FinSMEs: You just raised a new funding round. Please, tell us more about it.

We’re excited to share some great news! We’ve raised $9M from some truly amazing partners – NEA, Prosus Ventures, CVP (Carya Venture Partners), and General Catalyst. It’s an honor to join forces with these venture firms, each bringing their unique expertise to our mission. Their investment is more than just financial support; it’s a partnership that opens up new opportunities for us. With this funding, we’re ready to make our routing product even better, reach out to more customers, and deepen our understanding of our models’ inner workings. This is a big step forward for Martian, and we can’t wait to see where this journey takes us.

FinSMEs: Can you share some numbers and achievements of the business?

We’re really excited about the performance we’ve seen from our router. We’ve been able to outperform GPT-4 on OpenAI’s own evals (openai/evals on GitHub), performing as well as or better than GPT-4 on more than 88% of those tasks. And we don’t just outperform on metrics like accuracy; on average, the router is 20% less expensive than GPT-4 – even though in those experiments we route solely based on accuracy and similar metrics. On some tasks, we see greater than 30x reductions in cost. With some of our existing customers, we’ve seen cost reductions in production as large as 12x. And that’s before counting the reduction in total cost of ownership for AI from a system that’s future-proofed and obviates model selection.

Plus, that’s just on our general purpose router. For enterprise clients, we’ve built custom routers. Some of those routers have been preferred over GPT-4 over 90% of the time when annotated according to human preferences.

FinSMEs: What are your medium-term plans?

There are two things we’re really excited to do.

The first is to work with the community to improve the model router; we’re already getting some pretty awesome results and we’re excited to see what we can do when partnering with developers and customers.

The second is to release additional tools that the community can use, based around model mapping. There’s a lot to learn about how models work and a lot we can do with those learnings.

FinSMEs

15/11/2023