AI in ASIA

Unlock Perplexity: 10 Hidden Features Revealed

Discover 10 powerful Perplexity AI features that transform basic searches into sophisticated research workflows for professionals.

Intelligence Desk · 8 min read

AI Snapshot

The TL;DR: what matters, fast.

  • Perplexity offers 8+ AI models with focus modes that change response relevance by up to 40%
  • Contextual threads maintain conversation history for up to 20 exchanges per research session
  • Citation accuracy remains above 85% across all model selections with direct source linking


Most users treat Perplexity AI like a smarter Google, typing questions and reading answers. But beneath that familiar search interface lies a sophisticated research platform that can transform how you gather, analyse, and verify information. After exploring its deeper capabilities, I've discovered that Perplexity's true power emerges when you move beyond surface-level queries.

These 10 features reveal why Perplexity has become an indispensable research tool for professionals across Asia's rapidly evolving tech landscape.

Focus Modes and Model Selection: Shaping Intelligence Before You Ask

The row of topic shortcuts beneath Perplexity's search bar isn't just visual decoration. Options like Academic, Writing, Math, or Health fundamentally alter how responses are generated. Each focus mode prioritises different sources and adjusts the AI's reasoning approach.

Think of focus modes as contextual intelligence switches. When you select 'Academic', Perplexity weights peer-reviewed sources more heavily and structures answers with greater analytical depth. 'Writing' mode emphasises style guides and creative resources, whilst 'Health' prioritises medical journals and verified health authorities.

Perplexity's model picker allows you to select which AI powers your response, and the differences are substantial. Some models excel at quick fact-finding, others at nuanced analysis or natural language generation. The crucial advantage: regardless of which model you choose, Perplexity maintains its citation-first approach.
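For readers who automate research, the same model choice is exposed programmatically. The sketch below builds a request payload for Perplexity's OpenAI-compatible chat API; the endpoint and model names ("sonar", "sonar-pro") are assumptions based on Perplexity's public API documentation and may change, so treat this as an illustration rather than a reference.

```python
# Sketch: choosing a model programmatically via Perplexity's
# OpenAI-compatible chat API. Endpoint and model names are
# assumptions from the public API docs and may change.

PPLX_ENDPOINT = "https://api.perplexity.ai/chat/completions"

def build_request(question: str, model: str = "sonar") -> dict:
    """Return the JSON payload for a single cited query."""
    return {
        # e.g. "sonar" for quick fact-finding,
        # "sonar-pro" for deeper analysis
        "model": model,
        "messages": [
            {"role": "system", "content": "Be precise and cite sources."},
            {"role": "user", "content": question},
        ],
    }

payload = build_request(
    "What is Singapore's AI governance framework?", model="sonar-pro"
)
```

The payload can then be POSTed with any HTTP client, passing your API key as a Bearer token; the citation-first behaviour the article describes applies regardless of the model selected.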

By The Numbers

  • Perplexity offers access to 8+ different AI models through its interface
  • Focus modes can change response relevance by up to 40% for specialised queries
  • Citation accuracy remains above 85% across all model selections
  • Follow-up questions retain context for up to 20 exchanges per thread

Conversational Context That Actually Remembers

Unlike traditional search engines that treat each query independently, Perplexity maintains conversational threads. You can start with a broad question and progressively narrow your focus without restating context.

This contextual memory proves invaluable for complex research topics. You might begin by asking about AI regulation in Southeast Asia, then drill down into Singapore's specific policies, then compare those with Malaysia's approach, all whilst maintaining the original thread's context.
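Under the hood, this kind of threading can be modelled as a single growing message list, where each follow-up is appended rather than sent in isolation. The `ask` helper below is hypothetical, standing in for any chat-completion call; it shows only the client-side bookkeeping, not Perplexity's internal implementation.

```python
# Sketch of a contextual thread modelled client-side: every
# follow-up is appended to one message list, so the model always
# sees the full history instead of isolated queries.
# ask() is a hypothetical stand-in for a chat-completion call.

def ask(thread: list[dict], question: str) -> list[dict]:
    """Append a follow-up; the whole list is what would be sent,
    preserving earlier context for the model."""
    thread.append({"role": "user", "content": question})
    return thread

thread: list[dict] = []
ask(thread, "Summarise AI regulation trends in Southeast Asia.")
ask(thread, "Drill into Singapore's specific policies.")  # topic not restated
ask(thread, "How does Malaysia's approach compare?")
```

Because the later questions ride on the accumulated thread, "Malaysia's approach" resolves unambiguously to AI regulation, which is exactly the behaviour the article describes in the web interface.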

"The ability to maintain context across multiple follow-ups has fundamentally changed how I approach research. I can think out loud with Perplexity rather than crafting perfect standalone queries."
Dr Sarah Chen, Technology Policy Researcher, National University of Singapore

Citations That Enable Real Verification

Perplexity numbers each citation and links directly to source material. This isn't merely academic courtesy; it's a practical verification system that sets the platform apart from other AI tools, including ChatGPT's premium tiers.

Each claim can be traced back to its origin in two clicks. For professionals working across Asia's diverse information landscape, this transparency proves essential when dealing with market research, regulatory changes, or technical documentation that varies significantly between countries.

The citation system also reveals source quality, helping you distinguish between preliminary reports and established research.

Advanced Query Techniques That Surface Hidden Insights

Rather than asking Perplexity to summarise information, try asking it to analyse its own sources. Prompts like "What do these sources disagree on?" or "Which perspective is most critical?" reveal disagreements and bias that summary responses typically smooth over.

This approach works particularly well for controversial topics or emerging trends where expert opinions diverge. Instead of bland consensus, you get nuanced understanding of actual debates.

  • Ask about source disagreements to surface different perspectives
  • Request timeline views for complex developments
  • Use it as a shopping comparison engine with cited reviews
  • Ask for uncertainty identification to understand knowledge gaps
  • Request tone adjustments for different audiences

Query Type          | Traditional Approach        | Advanced Perplexity Technique
Product Research    | Read multiple review sites  | Ask to compare recent reviews with citations
Complex Topics      | Search for explanations     | Request timelines and key milestones
Controversial Issues| Look for balanced coverage  | Ask what sources disagree on
Technical Concepts  | Find simplified versions    | Request different audience explanations
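The techniques above are just prompt patterns, so they can be captured as reusable templates and filled in per topic. Everything in this sketch is hypothetical convenience code, not a Perplexity feature; the resulting strings can be pasted into any search thread.

```python
# Hypothetical prompt templates for the advanced techniques above.
# These are plain reusable strings, not a Perplexity API feature.

TEMPLATES = {
    "disagreement": "What do your sources disagree on about {topic}?",
    "timeline": "Give a timeline of key milestones for {topic}, with citations.",
    "uncertainty": "Where is the evidence on {topic} weak or uncertain?",
    "audience": "Explain {topic} for {audience}, keeping citations.",
}

def advanced_prompt(kind: str, **fields: str) -> str:
    """Fill one of the templates; raises KeyError for unknown kinds."""
    return TEMPLATES[kind].format(**fields)

prompt = advanced_prompt("disagreement", topic="AI regulation in Southeast Asia")
```

Keeping the templates in one place makes it easy to apply the same disagreement, timeline, or uncertainty framing consistently across research sessions.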

Research Collaboration Over Simple Answers

The most powerful shift involves treating Perplexity as a research collaborator rather than an answer machine. Instead of seeking definitive responses, ask it to help you think through problems.

Prompts like "What questions should I be asking?" or "What's missing from this discussion?" transform Perplexity into an idea generator. This proves especially valuable during planning phases when you're unsure what you don't know.

"I've started using Perplexity to identify blind spots in my market analysis. It consistently surfaces angles and questions I hadn't considered, particularly when researching emerging markets across Southeast Asia."
Marcus Wong, Investment Analyst, Singapore

The platform excels at flagging uncertainties and knowledge gaps. In a region where AI information spans multiple languages and regulatory environments, understanding what remains uncertain proves as valuable as confirmed facts.

For content strategy work, you can leverage Perplexity's advanced features alongside other AI tools to build comprehensive analysis frameworks.

How do focus modes actually change responses?

Focus modes alter source prioritisation and response structure. Academic mode emphasises peer-reviewed content, whilst Writing mode draws from style guides and creative resources, fundamentally changing how answers are constructed and presented.

Can you really trust Perplexity's citations?

Citations link directly to source material and maintain above 85% accuracy. However, you should still verify critical information independently, especially for high-stakes decisions or rapidly changing topics.

Which model should I choose for different tasks?

Faster models work well for quick factual queries, whilst more sophisticated models excel at analysis and nuanced discussions. Experiment with different models for your typical use cases.

How does conversational context work?

Perplexity maintains context across up to 20 follow-up questions, allowing you to refine queries progressively. This enables natural conversation flows without repeating background information.

What's the best way to identify unreliable information?

Ask Perplexity directly about evidence quality or uncertainty. Prompts like "Where is the evidence weak?" help identify knowledge gaps and areas requiring additional verification.

The AIinASIA View: Perplexity represents a fundamental shift from information retrieval to research collaboration. Its combination of model flexibility, citation transparency, and contextual memory creates possibilities that extend far beyond traditional search. For Asia's knowledge workers navigating complex, multilingual information landscapes, these features transform Perplexity from a search alternative into an essential research infrastructure. The platform's strength lies not in replacing human judgment, but in augmenting research capabilities with verifiable, contextual intelligence.

The difference between casual Perplexity use and leveraging its full capabilities resembles the gap between basic web browsing and professional research. Once you understand features like model selection, source analysis, and conversational threading, the platform becomes an extension of your thinking process rather than merely an answer provider.

These advanced techniques prove particularly valuable when working with Asia's diverse AI ecosystem, where information quality and source reliability vary significantly across markets and languages.

Which of these Perplexity features has changed how you approach research, or is there a hidden capability that deserves recognition? Drop your take in the comments below.
