    Google AI Mode in Chrome Gets Side-by-side Browsing

    The integration of artificial intelligence directly into the web browsing experience has reached a new milestone as Google announces a significant update to AI Mode in its Chrome desktop browser. The update introduces side-by-side page viewing and a revamped "plus" menu designed to streamline how users interact with digital information, effectively transforming the browser from a simple window onto the internet into an active research assistant. By allowing users to maintain an AI-driven dialogue while simultaneously navigating external websites, Google is addressing one of the primary friction points in modern search: the need to constantly toggle between search results and the content itself.

    Enhancing the Multitasking Workflow with Side-by-Side Viewing

    The centerpiece of this update is the introduction of a native side-by-side rendering engine for AI Mode. Previously, when a user engaged with Chrome’s AI features—often triggered through the address bar or a dedicated panel—clicking on a link generated by the AI would navigate the user away from the conversation to a new tab or replace the current view. This "pogo-sticking" behavior often disrupted the flow of research, forcing users to remember their previous prompts or manually navigate back and forth to refine their queries based on what they had just read.

    Under the new system, clicking a link within the AI Mode panel triggers a split-screen interface on the desktop version of Chrome. The destination webpage opens in the main window while the AI Mode panel remains pinned to the side. This architectural change allows for a continuous feedback loop. For example, a student researching a complex scientific topic can click on a source link provided by the AI; as the source page loads, they can immediately ask the AI to summarize a specific paragraph from that page or compare the new information with data previously discussed in the chat.

    Robby Stein, Vice President of Product for Google Search, and Mike Torres, Vice President of Product for Chrome, emphasized in a joint statement that these updates are part of a broader mission to make AI feel "native" to the browsing experience. By eliminating the barrier between the AI interface and the web content, Google is attempting to create a unified workspace that mirrors how professional researchers and power users actually operate.

    The New Plus Menu: Integrating Context and Multimodal Search

    In addition to the layout changes, Google has introduced a "plus" menu located within the Chrome search box on the New Tab page and inside the AI Mode interface. This feature is designed to solve the "context gap" that often limits the effectiveness of Large Language Models (LLMs). While standard AI chats often require users to copy and paste text or upload files manually, the new plus menu allows users to pull context directly from their active browsing session.

    The menu enables users to select recently opened tabs and add them as context for a specific search or query. This means that if a user has five different tabs open regarding travel destinations in Italy, they can use the plus menu to tell the AI to "summarize the common themes across these five tabs" without ever leaving the search interface. Furthermore, the menu supports the attachment of images and PDF files, allowing for a multimodal approach to information gathering.

    This update also relocates "Canvas" and image creation tools. Previously tucked away within specific AI sub-menus, these creative features are now accessible from any Chrome surface that displays the plus menu. This suggests that Google views AI not just as a tool for consumption and summarization, but as a persistent utility for creation that should be available regardless of what the user is currently viewing.

    A Chronology of Chrome’s AI Evolution

    The current update is the latest step in an aggressive timeline that Google has maintained since the beginning of 2024 to defend its search dominance against emerging AI-first competitors.

    • January 2024: Google introduced "experimental AI" features in Chrome M121, including a Tab Organizer and "Help me write," a feature designed to assist users in drafting text on the web.
    • May 2024: At the Google I/O developer conference, the company announced the integration of Gemini (formerly Bard) directly into the Chrome address bar (omnibox). This allowed users to type "@gemini" to start a conversation.
    • August 2024: Google expanded "Google Lens" capabilities within the desktop browser, allowing users to click and drag over any part of a website to search for visual elements without leaving the tab.
    • Late 2024/Early 2025: The rollout of "AI Mode" as a dedicated environment for deep research, which has now culminated in the current side-by-side and contextual updates.

    This progression shows a clear shift from "AI as a feature" (like a spell-checker) to "AI as the interface" (where the browser understands the user’s intent and surroundings).

    Strategic Implications and Market Context

    The decision to bake AI deeper into Chrome is a strategic necessity for Google. According to data from StatCounter, Google Chrome currently maintains a dominant market share of approximately 65% globally. However, Microsoft has been leveraging its own browser, Edge (which holds about 5% of the market), to aggressively push its "Copilot" AI. Edge has featured a sidebar AI for over a year, which provided many of the multitasking benefits that Google is only now standardizing in Chrome.

    By introducing side-by-side browsing, Google is closing a competitive gap with Microsoft Edge while leveraging its superior integration with the Google Search ecosystem. For Google, the browser is the primary gateway to its Search Generative Experience (SGE). If users find that AI-powered search is more efficient when conducted through a sidebar, Google must provide that experience to prevent users from migrating to Edge or specialized AI browsers like Arc or Brave.

    Industry analysts suggest that this move is also aimed at increasing the "stickiness" of the Chrome ecosystem. When a browser can analyze PDFs, summarize open tabs, and provide a persistent research assistant, the cost of switching to a different browser—where those contextual links might be lost—becomes much higher for the average user.

    Official Responses and User Privacy

    While the announcement from Stein and Torres focused on productivity and user experience, the rollout has prompted questions regarding data privacy and how the AI "reads" the user’s open tabs. Google has clarified that the context provided via the plus menu is user-initiated. The AI does not automatically ingest every tab the user has open; rather, it requires the user to specifically select which tabs or files should be used as context for a given prompt.

    This "opt-in context" model is a crucial distinction for corporate and privacy-conscious users who may have sensitive information open in other tabs. By requiring the use of the plus menu to "attach" a tab, Google maintains a layer of user control over what data is sent to the Gemini models for processing.
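    Conceptually, the opt-in model acts as a gate between the browsing session and the model: only tabs the user explicitly attaches are serialized into the prompt, while everything else stays local. The sketch below is purely illustrative; the class and method names are invented for this example and do not reflect Chrome's actual implementation.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Tab:
        title: str
        content: str  # the page text as the browser sees it

    @dataclass
    class BrowsingSession:
        open_tabs: list                                  # everything the user has open
        attached: list = field(default_factory=list)     # explicitly shared via the plus menu

        def attach(self, tab: Tab) -> None:
            # Only a deliberate user action moves a tab into the shared set.
            self.attached.append(tab)

        def build_prompt(self, query: str) -> str:
            # The model never sees open_tabs directly -- only attached context.
            context = "\n\n".join(f"[{t.title}]\n{t.content}" for t in self.attached)
            return f"{context}\n\nUser query: {query}" if context else f"User query: {query}"

    session = BrowsingSession(open_tabs=[Tab("Rome guide", "..."), Tab("Bank statement", "...")])
    session.attach(session.open_tabs[0])  # the user attaches only the travel tab
    prompt = session.build_prompt("Summarize this destination.")
    ```

    The key property is in `build_prompt`: the sensitive "Bank statement" tab never reaches the prompt because it was never attached, which is the user-control guarantee Google describes.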

    Broader Impact on Digital Research and Education

    The implications of side-by-side AI browsing extend significantly into the sectors of education and professional research. For decades, the standard method of online research involved a fragmented workflow: searching, clicking a link, reading, taking notes in a separate document, and returning to the search engine.

    With the new AI Mode updates, the "notes" and the "search" are effectively merged. The AI panel acts as a living document that understands the source material the user is currently reading. This could fundamentally change how students interact with academic papers or how analysts process quarterly reports. The ability to attach a PDF and then browse related news sites in the side-by-side window allows for a level of cross-referencing that was previously impossible without a multi-monitor setup or complex window management.

    Furthermore, the multimodal nature of the plus menu—combining images, PDFs, and live tabs—suggests a future where search is no longer purely text-based. A user could upload a photo of a broken appliance part (via the plus menu) and have the AI search through open tabs of repair manuals to identify the specific replacement needed, all while keeping the manual visible in the side-by-side pane.

    Availability and Future Outlook

    The new updates to AI Mode in Chrome are currently rolling out to users in the United States. Google has confirmed that a global rollout to other regions and languages is planned for the coming months, though no specific dates have been provided for European or Asian markets.

    Looking ahead, the evolution of Chrome’s AI suggests that Google is moving toward an "Agentic" browser—one that doesn’t just find information, but can act upon it. As Gemini becomes more capable of understanding the structure of websites, future updates may allow the AI to not only summarize a page in the side-by-side view but also perform actions, such as filling out forms or navigating complex checkout processes based on the context of the user’s conversation.

    For now, the addition of side-by-side browsing and the contextual plus menu represents a significant refinement of the AI-powered web. It is a move that prioritizes the user’s workflow over the traditional "link-and-click" model of the internet, signaling a new era where the browser is as much a collaborator as it is a viewer.
