First look: Inside Modyfi’s push to build the future of graphic design

Modyfi, a startup founded by Snap and Amazon alums, has launched a public beta of its AI-powered graphic design software and on Wednesday announced a $7 million funding round led by NEA.

The newly released app is browser-based and looks very similar to the dual-column interface found in other graphic design applications, such as Adobe’s Creative Cloud software or Affinity’s suite of programs. The significant differences are the new AI-powered command field at the top center of the UI and a new approach to adaptive, context-aware pattern and placement tools.

The role of AI in the design space has captured the attention of the largest players: OpenAI’s very first acquisition was Global Illumination Inc., a studio that builds “creative tools, infrastructure, and digital experiences,” as OpenAI wrote in the blog post announcing the deal.

In an exclusive interview with VentureBeat, Modyfi cofounder Joseph Burfitt showed off how easy it was to create eye-catching designs, and explained the company’s vision and future plans. “The three key things that we care about the most [are the] graphic design suite, process and collaboration, and then the AI capabilities,” he said.

Burfitt demonstrated how Modyfi allows designers to quickly conceptualize designs by dragging and dropping images, applying effects and modifiers with natural language commands, and generating variations using its “image-guided generation” model. The tool also enables real-time collaboration so multiple designers can work on a file simultaneously.

Burfitt explained that because familiarity is key, Modyfi was built with the kind of modern interface that design professionals expect to see. The new layer of AI capabilities is intended to reduce some of the ambiguity of the production process. “We also want to bring in elements where it enables the designer to remove more of that process [where] someone says, ‘Hey, can you make this image pop?’ What does [that] actually mean to someone like a graphic designer?

“So rather than going backwards and forwards with the client,” Burfitt said, “the AI within the chat window can just say ‘What do you mean by pop? Increase the vibrancy? Change the saturation?’”

From stealth mode to rapid scaling

Burfitt explained that Modyfi had been working in stealth mode for about 18 months to build out the application before launching the public beta. The company wanted to make sure the product was solid and reliable before widely promoting it, as losing users’ work was not an option.

“We haven’t [yet] massively gone wide right now, [as] we want to make sure that it’s a very solid work and performant. We have to win the trust of our users. We can never lose any kind of content which they create on the platform,” said Burfitt.

Now that the beta is available, Burfitt said Modyfi will start ramping up awareness and growing its user base. But it plans to do so gradually to maintain quality as more people sign up and start using the design tools.

Despite a growing user count, heavy compute requirements proved less of a concern for Modyfi than they might have been. Burfitt and his team mitigated the application’s computational demands by deploying a distributed service that can draw on GPU resources from multiple cloud providers, which has allowed them to scale effectively to meet demand.

“So when people are asleep in Japan, Australia, we can actually ship our processing overseas,” said Burfitt. This avoids capacity constraints in U.S.-based compute or GPU service availability, he explained.
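Burfitt didn’t detail how that routing works, but the general idea is a scheduler that prefers regions where it is currently nighttime and GPU demand is low. The TypeScript sketch below illustrates the concept; the region list, endpoints and off-peak heuristic are illustrative assumptions, not Modyfi’s implementation.

```typescript
// Hypothetical sketch of time-zone-aware GPU region selection.
// Region names, endpoints and the off-peak heuristic are illustrative,
// not Modyfi's actual routing logic.
interface GpuRegion {
  name: string;
  timeZone: string; // IANA time zone used to estimate local demand
  endpoint: string; // hypothetical job-submission URL
}

const REGIONS: GpuRegion[] = [
  { name: "us-east", timeZone: "America/New_York", endpoint: "https://gpu.us-east.example.com" },
  { name: "ap-tokyo", timeZone: "Asia/Tokyo", endpoint: "https://gpu.ap-tokyo.example.com" },
  { name: "ap-sydney", timeZone: "Australia/Sydney", endpoint: "https://gpu.ap-sydney.example.com" },
];

// Prefer a region where it is currently night, on the assumption that
// local demand for (and contention over) GPUs is lowest then.
function pickOffPeakRegion(now: Date = new Date()): GpuRegion {
  for (const region of REGIONS) {
    const hour = Number(
      new Intl.DateTimeFormat("en-US", {
        timeZone: region.timeZone,
        hour: "numeric",
        hourCycle: "h23",
      }).format(now)
    );
    // Treat 22:00-06:00 local time as off-peak.
    if (hour >= 22 || hour < 6) return region;
  }
  return REGIONS[0]; // fall back to the default region if nowhere is off-peak
}
```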

He also mentioned that Modyfi uses WebGPU, a new web standard that delivers better GPU performance in browsers than earlier technologies like WebGL. By taking advantage of a user’s own GPU acceleration, Modyfi can perform tasks like background removal, and soon depth matching and upscaling, much faster on local hardware.
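Modyfi hasn’t published its rendering code, but the pattern Burfitt describes, using WebGPU where the browser supports it and falling back to WebGL otherwise, can be sketched roughly as follows. The function and fallback path here are assumptions for illustration, not Modyfi’s implementation.

```typescript
// Minimal sketch: probe for WebGPU support and fall back to WebGL.
// Illustrative only; compiling this in TypeScript requires the
// @webgpu/types definitions for navigator.gpu.
async function initGPU(canvas: HTMLCanvasElement) {
  if ("gpu" in navigator && navigator.gpu) {
    // WebGPU path: request an adapter and a logical device from the browser.
    const adapter = await navigator.gpu.requestAdapter();
    if (adapter) {
      const device = await adapter.requestDevice();
      const context = canvas.getContext("webgpu");
      context?.configure({
        device,
        format: navigator.gpu.getPreferredCanvasFormat(),
      });
      return { backend: "webgpu" as const, device, context };
    }
  }
  // Fallback: an older WebGL context for browsers without WebGPU support.
  const gl = canvas.getContext("webgl2") ?? canvas.getContext("webgl");
  return { backend: "webgl" as const, gl };
}
```

Image operations such as background removal would then run against whichever backend was selected, which is part of why processing on the user’s local GPU can be faster than a round trip to a server.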

Funding to support further development

Looking ahead, Burfitt pointed to plans to expand Modyfi’s AI capabilities across different design styles, while keeping designers in control. He also emphasized the importance of collaboration and said the browser-based tool could evolve to be more conversational and intuitive over time.

On the business side, Burfitt said the $7 million in funding from NEA will primarily go towards further development and bringing on more engineers to tackle the complex challenges of building a graphic design platform. “Super excited to have them on. It’s incredible to have a caliber of VC like NEA who see our vision as much as we do. So [we’re] very, very excited to have them on and [we’ve] been utilizing that from a developer perspective.”

With early traction among top companies, Modyfi aims to push the boundaries of what’s possible at the intersection of design and AI.

“We’ve got thousands of people using this right now and hundreds of companies like Snapchat, Reddit, Stripe — the Nvidia creative team are even using it. So [we’re] pretty, pretty excited about the distribution so far,” said Burfitt.
