A few disruptive players in the financial services scene came out to a gathering held last Thursday at Google’s offices in the Chelsea neighborhood of New York City. It was all part of an event organized by C2C Global, a community of Google Cloud users, to bring together business leaders and experts to discuss not only how they are making use of the cloud, but the possibilities of training machine learning models at the edge.
There was a fireside chat between executives from DoiT and Current, as well as a panel discussion that also included AMD and Data Capital Management, moderated by Arsho Toubi, a customer engineer at Google Cloud.
In his fireside chat with Spenser Paul, head of global alliances and programs at DoiT, Trevor Marshall, CTO of challenger bank Current, described how his company initially took advantage of incentives AWS offered, then jumped to Google Cloud when a new opportunity arose.
“What we’re tackling is the problem of access to financial services for most people,” Marshall said. Current has worked in the cloud from its inception, he said. “For us, cloud spend is the product. We don’t have partners operating the key workloads.”
When Current was founded in 2015, the company deployed AWS workloads, Marshall said, thanks to receiving $100,000 in credits from AWS. “We blew through those and then Google gave us $100,000 in credits and so we hopped over there,” he said. “I like to say we came for the credits but stayed for Kubernetes.”
By 2018, Current had restructured the way it worked, Marshall said, moving all of its workloads to Kubernetes with Google Kubernetes Engine (GKE) as the full control plane.
Current saw demand for its services pick up with the rise of the pandemic, he said. “COVID, for us, was a huge accelerator of customers onto the platform. People needed financial services to get stimulus checks and other things but couldn’t go into a branch.” The reluctance to bank with an online company evaporated overnight, Marshall said. “Once we started ramping, we had to really start thinking through horizontal scaling on all of our workloads.” That led to conversations about making more use of resources such as DoiT and Google Cloud, he said.
But diving into more cloud-based services and launching new products is not cost-free, which can be a sticking point for companies on razor-thin budgets. “How do you decide whether you need to move faster to launch a new product even if it does cost more in the immediate future because launching that new product will make up for that in the long term?” asked Paul.
Marshall said that is why companies use cloud. “You need to be able to just put stuff out there. It’s way more valuable to us to launch products than it is to save money in the short term,” he said. Launching a product earlier than expected might attract many more customers, Marshall said. “That’s worth spending twice as much, because the cloud spend makes up a relatively small piece of what it costs for us to serve customers. You don’t need to do capacity planning for features.”
Working in the cloud, and a willingness to change providers, also seemed to fit the ambitions of Michael Beal, CEO of Data Capital Management (DCM), an artificial intelligence investment manager. “When you’re looking at your physical infrastructure and you want to step up to doubling or quadrupling your data workloads some time in the future, you have to pay for those boxes today,” he said.
An expatriate from JPMorgan Chase, Beal is familiar with the balancing act of cost, outcomes, and expectations. Innovation meant significant, upfront capital expenditures, he said, which in the past forced him to create big projections to try to justify them. “A year or two years later, you’re never actually going to meet it and in a big company, you’re just going to end up getting everything chopped at the waist,” Beal said.
DCM sought elastic scalability, he said, so the company could focus on building software while getting access to compute, then scale horizontally as it grew. “That really made sense to us,” Beal said. From the start, he said, he and his colleagues foresaw that the cloud would be all about access to hardware and horizontal scalability.
Initially, DCM went with AWS, Beal said, but he also thought about moving beyond cheap storage to the service layer that wraps around it. “At the time, we really were painfully focused on proprietarily doing everything,” he said. When Google Cloud approached DCM last year, Beal said, it fit his company’s trajectory and desire for modularity.
The advent of tensor processing units (TPUs) from Google, he said, also opened the door to new possibilities. DCM had been using regular CPUs for a lot of its workflows, Beal said, which included simple ensemble models and supervised learning that did not need that much shared memory — at least at first. “As data has really caught up to the innovations that we were doing, there’s a lot more use cases that we can do,” he said. “We’ve been pushing much more into the high-performance computing space. It really was the TPUs that brought us over here [to Google Cloud]. That is the future of what we’re doing.”
Beal said DCM also wants to move on from using one core AI model. “Some of the work that I’ve seen Google pushing around federated learning, and what I see you’re going to be doing around edge compute,” he said, “I really think that’s going to be really exciting for what I want to solve for, which is creating very specific robots that learn your preferences, learn what you care about, train at our core, but then are really doing the federated learning at the edge.”