Perplexity’s new AI agent can perform multi-step tasks on your Android device

Running Perplexity on OnePlus Pad 2.
Nadeem Sarwar / Digital Trends

Perplexity announced Thursday that it is beginning to roll out an agentic AI for Android devices, called Perplexity Assistant, which will be able to independently take multi-step actions on behalf of its user.

“We are excited to launch the Perplexity Assistant to all Android users,” Perplexity CEO Aravind Srinivas wrote in a post to X on Thursday. “This marks the transition for Perplexity from an answer engine to a natively integrated assistant that can call other apps and perform basic tasks for you.”


The Assistant will be available through the Perplexity mobile app and will run atop the platform’s existing “answer engine” model. As such, Assistant will have access to the internet. Users will be able to set reminders and schedule future actions, much like ChatGPT’s new Tasks feature. For example, the agent will be able to remind users of an upcoming event by automatically creating a calendar entry with the correct time and date.

Users can also direct it to take more immediate actions, such as hailing a rideshare or searching for a song, the company noted. The new feature can also access the device’s camera, so users could, in theory, ask it to look for restaurants in the immediate area and then have it make reservations.

Perplexity Assistant is free to use as part of the mobile app and will initially be available in 15 languages, including English, Spanish, French, German, Japanese, Korean, and Hindi. How well it will interact with other agentic AIs on the device, such as Gemini or ChatGPT Tasks, remains to be seen.

Agents are the hot new thing in generative AI. These lightweight models are often “distilled” from larger LLMs like ChatGPT, Claude, or Gemini, but are tasked with interpreting data and autonomously taking action rather than generating content. These actions can be straightforward, like automatically transcribing a Zoom call, or multi-step: think of having an agent plan an eight-course meal, shop for the necessary ingredients on Instacart, then email invites to your guests.

The market is already being saturated with AI agents from the various leading companies. Anthropic kicked off the agentic race in November when it debuted its Computer Use API, which enables Claude to emulate human mouse and keyboard actions to control the local computing system. Microsoft announced Copilot Actions the same month and began rolling the agents out to business and enterprise subscribers in January. Nvidia followed suit at CES 2025 when it revealed its new Nemotron family of LLMs, and OpenAI finally unveiled its AI agent, Operator, as a “research preview” just a few hours ago.

Andrew Tarantola
Former Digital Trends Contributor