Google


Pixel 8a review—The best deal in smartphones

bang for your buck —

It’s still $500, with a better screen, longer support, and the same great camera.

  • The Pixel 8a and its speedy 120 Hz display.
  • The back is a micro-textured plastic—and it’s very blue.
  • The aluminum camera bar houses the same cameras as last year.
  • The camera bar just barely sticks out from the back.
  • The bottom has a USB-C port.
  • The usual power and volume buttons on this side.
  • Compared to the Pixel 7a (right), you get rounded corners and brighter colors.
  • The home screen.

Photos: Ron Amadeo

SPECS AT A GLANCE: Pixel 8a
SCREEN 6.1-inch, 120 Hz, 2400×1080 OLED
OS Android 14
CPU Google Tensor G3

One 3.0 GHz Cortex-X3 core

Four 2.45 GHz Cortex-A715 cores

Four 2.15 GHz Cortex-A510 cores

GPU ARM Mali-G715
RAM 8GB
STORAGE 128GB, UFS 3.1
BATTERY 4492 mAh
NETWORKING Wi-Fi 6E, Bluetooth 5.3, GPS, NFC
PORTS USB Type-C 3.1 Gen 1 with 18 W USB-PD 3.0 charging
CAMERA 64 MP main camera, 13 MP ultrawide, 13 MP front camera
SIZE 152.1×72.7×8.9 mm
WEIGHT 188 g
STARTING PRICE $499.99
OTHER PERKS IP67 dust and water resistance, eSIM, in-screen fingerprint reader, 5 W wireless charging

Somehow, Google’s midrange phone just keeps getting better. The Pixel 8a improves on the Pixel 7a in many ways—it has a better display, a longer support cycle, and the usual yearly CPU upgrades, all at the same $499 price as last year. Who could complain? The Pixel A series was already the best bargain in smartphones, and there’s now very little difference between it and a flagship-class device.

Year over year, the 6.1-inch, 2400×1080 display is being upgraded from 90 Hz to 120 Hz, giving you essentially the same experience you’d get on the “flagship” Pixels. The SoC is the same processor you’d get in the Pixel 8, a Google Tensor G3. That’s a 4 nm chip with one Arm Cortex-X3 core, four Cortex-A715 cores, four Cortex-A510 cores, and a Mali-G715 GPU.

Previously, the 120 Hz display was the primary thing A-series owners were missing out on compared to the more expensive Pixels, so its addition is a huge deal. Any comparison between the “midrange” Pixel 8a and the “flagship” 6.2-inch Pixel 8 will now just be splitting hairs. The flagship gets an extra 0.1 inches of display, 2 percent more battery, and Wi-Fi 7 instead of Wi-Fi 6E. The cameras are technically newer, but since they all run the same image-stacking software, the images look very similar. Are those things worth an extra $200? No, they are not.

It looks like the Pixel 9 line will end up with a third model, a small “Pro” phone with three cameras, presumably to try to put some distance between the flagship Pixels and the A series. The company will still ship a base-model Pixel 9 that will probably be cannibalized by the Pixel 9a, but if Google cut that model, it could arrive at a cohesive lineup.

The bezels are a bit bigger than what we generally see on a flagship, but they’re fine.

Ron Amadeo

The other big upgrade is seven years of update support. Just like on the Pixel 8, you’ll now get monthly security updates and major Android OS updates for a whopping seven years, the longest promise in the industry. Whether a phone like this can actually run what would presumably be “Android 21” in 2031 is anyone’s guess. Previous Pixel models have often felt like they were being prematurely killed due to the short three-year update window.

When planning this far ahead, you most likely have to pick one situation or the other as hardware ages: Do you want the phone to die a premature death due to software limits, or have a years-longer life cycle with an eventual software update that runs poorly because the hardware is too slow? Even if running Android 21 is a challenge for this little midranger, the phone will at least be allowed to die a natural death due to actual hardware limitations. Google won’t be creating e-waste on an artificial software timeline; the phone will only become e-waste when the hardware actually gives out, and users will be allowed to squeeze as much life as possible out of it until then. That’s a good thing!

Over those seven years, don’t be surprised if the Pixel 8a misses out on some of Google’s future AI features. The phone has 8GB of RAM, and Google is already saying that’s not enough for some of the AI features it wants to make in the future. The company says something like a generative AI-powered smart reply system would require keeping models in memory 24/7, and that would cut down on memory available for the OS, so phones need more memory. One leaked Pixel 9 model had 16GB of RAM, which is wild.

I’m someone who views a phone as a piece of hardware for running apps and just wants the OS to be simple and clean and get out of the way. I don’t need any of Google’s generative AI features, so I don’t consider this a big loss at all. Hopefully, future builds of Android allow you to turn AI features off. If Google actually wants to support these 8GB of RAM devices for seven years, it will need to do that.



OpenAI revs up plans for web search, but denies report of an imminent launch

High-tech drama —

The search.chatgpt.com URL is being set up, and Google employees are being poached.

Updated


OpenAI is eventually coming for the most popular website on the Internet: Google Search. A Reuters report claimed that the company behind ChatGPT is planning to launch a search engine as early as this Monday, but OpenAI denied that Monday would be the day.

The company recently confirmed it’s holding a livestream event on Monday, but an OpenAI rep told Ars that “Despite reports, we’re not launching a search product or GPT-5 on Monday.” Either way, Monday is an interesting time for an OpenAI livestream. That’s the day before Google’s biggest show of the year, Google I/O, where Google will primarily want to show off its AI prowess and convince people that it is not being left in the dust by OpenAI. Google seeing its biggest search competition in years and suddenly having to face down “OpenAI’s Google Killer” would definitely have cast a shadow over the show.

OpenAI has been inching toward a search engine for a while now. It has been working with Microsoft on “Bing Chat,” a generative-AI layer built into Microsoft’s search engine. Earlier this week, The Verge reported that “OpenAI has been aggressively trying to poach Google employees” for an upstart search team. “Search.chatgpt.com” is already being set up on OpenAI’s servers, so it’s all falling into place.

When the launch of OpenAI search does happen, one thing to really watch for is what types of search queries it can handle. Right now, ChatGPT can really only replace search queries that are questions. But the primary reason Google Search is the No. 1 most visited website is not that it answers questions; it’s that many people use Google as their gateway to the Internet. Here’s a list of the top Google queries; most of them are not questions, they are the names of other websites. The No. 1 Google query is “youtube,” No. 2 is “facebook.” Also in the top 10 are “amazon,” “gmail,” “instagram,” and “whatsapp web,” and truly sad is No. 16, where people go to Google and search for “google” to get the homepage. That link is a top-100 list, and I don’t think I would categorize a single one as a “question.”

These navigational queries are the bread and butter of a search engine, not answers. If you want to make a purchase or reservation, view a map, download software, do research, watch a video, or read news, these are all things Google can help with that chatbots typically can’t. These types of navigational queries used to be the only thing Google could do—the obsession with answers only happened around 2012 when Google was 14 years old. Before then, it was all about the 10 blue links.

So, exactly how much of a threat OpenAI is to the big incumbent will depend on how much of Google it’s actually able to replace. One attempt at a chatbot search engine is Perplexity AI, where the primary text on the page shouts “Ask questions, trust the answers.” Again there’s that “answer” obsession, and that’s not how Google built its fortune. If you type “youtube” into Perplexity, it doesn’t show a link to YouTube; it shows a Wikipedia-style response.

Earlier this week, Bloomberg reported a few details on the forthcoming OpenAI search engine and said it would “allow users to ask ChatGPT a question and receive answers that use details from the web with citations to sources such as Wikipedia entries and blog posts” and that “one version of the product also uses images alongside written responses to questions.” That does not quite sound like a Google Killer, and other than the source citations, it does not sound all that different from the ChatGPT of today.

The other issue is speed. Google became the gateway to the Internet by being really fast. Google is so proud of its speed that it actually displays the time it takes for a response to be generated, and it’s always fractions of a second. Generative AI, on the other hand, is probably best measured in “words per minute,” like you’re judging a typist. When you just want to do something simple or go somewhere, sitting there for several seconds waiting for text to slowly appear word by word can be agonizing. My last Google Search took 0.41 seconds to “generate” a full page of text. How long will OpenAI’s search engine take?

This article was updated to reflect OpenAI’s denial of the search engine launch day reports.



Google patches its fifth zero-day vulnerability of the year in Chrome

MEMORY WANTS TO BE FREE —

Exploit code for critical “use-after-free” bug is circulating in the wild.


Google has updated its Chrome browser to patch a high-severity zero-day vulnerability that allows attackers to execute malicious code on end user devices. The fix marks the fifth time this year the company has updated the browser to protect users from an existing malicious exploit.

The vulnerability, tracked as CVE-2024-4671, is a “use after free,” a class of bug that occurs in C-based programming languages. In these languages, developers must allocate the memory space needed to run certain applications or operations. They do this by using “pointers” that store the memory addresses where the required data will reside. Because this space is finite, memory locations should be deallocated once the application or operation no longer needs them.

Use-after-free bugs occur when the app or process fails to clear the pointer after freeing the memory location. In some cases, the pointer to the freed memory is used again and points to a new memory location storing malicious shellcode planted by an attacker’s exploit, a condition that will result in the execution of this code.
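
For readers who want to see what this class of bug looks like in practice, here is a minimal, hypothetical C sketch of a use-after-free. It is not Chrome code and has nothing to do with the actual CVE-2024-4671 exploit; it only shows the general pattern of a pointer being used after its memory has been handed back to the allocator.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical illustration of a use-after-free. Not Chrome code. */
typedef struct {
    void (*callback)(void);   /* a function pointer an attacker would love to control */
    char name[32];
} session_t;

static void greet(void) { puts("legitimate callback"); }

int main(void) {
    session_t *s = malloc(sizeof(*s));
    if (s == NULL) return 1;
    s->callback = greet;
    strcpy(s->name, "user");

    free(s);        /* the memory is returned to the allocator... */

    /* ...but the pointer is never cleared. If an attacker can get their own
     * data placed into the freed slot (heap grooming), this indirect call
     * now jumps wherever they chose. */
    s->callback();  /* use after free: undefined behavior */

    return 0;
}
```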

On Thursday, Google said an anonymous source notified it of the vulnerability. The vulnerability carries a severity rating of 8.8 out of 10. In response, Google said it would be releasing versions 124.0.6367.201/.202 for macOS and Windows and 124.0.6367.201 for Linux in subsequent days.

“Google is aware that an exploit for CVE-2024-4671 exists in the wild,” the company said.

Google didn’t provide any other details about the exploit, such as what platforms were targeted, who was behind the exploit, or what they were using it for.

Counting this latest vulnerability, Google has fixed five zero-days in Chrome so far this year. Three of the previous ones were used by researchers in the Pwn2Own exploit contest. The remaining one was for a vulnerability for which an exploit was available in the wild.

Chrome automatically updates when new releases become available. Users can force the update or confirm they’re running the latest version by going to Settings > About Chrome and checking the version and, if needed, clicking on the Relaunch button.



DeepMind adds a diffusion engine to latest protein-folding software

Added complexity —

Major under-the-hood changes let AlphaFold handle protein-DNA complexes and more.

Prediction of the structure of a coronavirus Spike protein from a virus that causes the common cold.

Google DeepMind

Most of the activities that go on inside cells—the activities that keep us living, breathing, thinking animals—are handled by proteins. They allow cells to communicate with each other, run a cell’s basic metabolism, and help convert the information stored in DNA into even more proteins. And all of that depends on the ability of the protein’s string of amino acids to fold up into a complicated yet specific three-dimensional shape that enables it to function.

Up until this decade, understanding that 3D shape meant purifying the protein and subjecting it to a time- and labor-intensive process to determine its structure. But that changed with the work of DeepMind, one of Google’s AI divisions, which released AlphaFold in 2021, and a similar academic effort shortly afterward. The software wasn’t perfect; it struggled with larger proteins and didn’t offer high-confidence solutions for every protein. But many of its predictions turned out to be remarkably accurate.

Even so, these structures only told half of the story. To function, almost every protein has to interact with something else—other proteins, DNA, chemicals, membranes, and more. And, while the initial version of AlphaFold could handle some protein-protein interactions, the rest remained black boxes. Today, DeepMind is announcing the availability of version 3 of AlphaFold, which has seen parts of its underlying engine either heavily modified or replaced entirely. Thanks to these changes, the software now handles various additional protein interactions and modifications.

Changing parts

The original AlphaFold relied on two underlying software functions. One of those took evolutionary limits on a protein into account. By looking at the same protein in multiple species, you can get a sense for which parts are always the same, and therefore likely to be central to its function. That centrality implies that they’re always likely to be in the same location and orientation in the protein’s structure. To do this, the original AlphaFold found as many versions of a protein as it could and lined up their sequences to look for the portions that showed little variation.

Doing so, however, is computationally expensive since the more proteins you line up, the more constraints you have to resolve. In the new version, the AlphaFold team still identified multiple related proteins but switched to largely performing alignments using pairs of protein sequences from within the set of related ones. This probably isn’t as information-rich as a multi-alignment, but it’s far more computationally efficient, and the lost information doesn’t appear to be critical to figuring out protein structures.

Using these alignments, a separate software module figured out the spatial relationships among pairs of amino acids within the target protein. Those relationships were then translated into spatial coordinates for each atom by code that took into account some of the physical properties of amino acids, like which portions of an amino acid could rotate relative to others, etc.

In AlphaFold 3, the prediction of atomic positions is handled by a diffusion module, which is trained by being given both a known structure and versions of that structure where noise (in the form of shifting the positions of some atoms) has been added. This allows the diffusion module to take the inexact locations described by relative positions and convert them into exact predictions of the location of every atom in the protein. It doesn’t need to be told the physical properties of amino acids, because it can figure out what they normally do by looking at enough structures.

(DeepMind had to train on two different levels of noise to get the diffusion module to work: one in which the locations of atoms were shifted while the general structure was left intact and a second where the noise involved shifting the large-scale structure of the protein, thus affecting the location of lots of atoms.)
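
As a loose illustration of those two noise regimes (a hypothetical C sketch, not DeepMind’s code, which is built on neural-network frameworks), a training pipeline could perturb a structure in two ways: jitter individual atoms while leaving the overall fold in place, or shift the large-scale structure so that many atoms move at once.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct { float x, y, z; } atom_t;

/* Uniform noise in [-scale, scale]; a real diffusion model would use Gaussian noise. */
static float jitter(float scale) {
    return scale * (2.0f * (float)rand() / (float)RAND_MAX - 1.0f);
}

/* Fine-grained noise: nudge each atom independently; the overall fold stays put. */
static void add_atom_noise(atom_t *atoms, size_t n, float scale) {
    for (size_t i = 0; i < n; i++) {
        atoms[i].x += jitter(scale);
        atoms[i].y += jitter(scale);
        atoms[i].z += jitter(scale);
    }
}

/* Coarse noise: one random shift applied to the whole structure, moving many atoms at once. */
static void add_global_shift(atom_t *atoms, size_t n, float scale) {
    float dx = jitter(scale), dy = jitter(scale), dz = jitter(scale);
    for (size_t i = 0; i < n; i++) {
        atoms[i].x += dx;
        atoms[i].y += dy;
        atoms[i].z += dz;
    }
}

int main(void) {
    atom_t atoms[3] = {{0, 0, 0}, {1.5f, 0, 0}, {1.5f, 1.5f, 0}};
    add_atom_noise(atoms, 3, 0.1f);   /* small per-atom perturbation */
    add_global_shift(atoms, 3, 2.0f); /* large-scale displacement */
    for (size_t i = 0; i < 3; i++)
        printf("%.2f %.2f %.2f\n", atoms[i].x, atoms[i].y, atoms[i].z);
    return 0;
}
```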

During training, the team found that it took about 20,000 instances of protein structures for AlphaFold 3 to get about 97 percent of a set of test structures right. By 60,000 instances, it started getting protein-protein interfaces correct at that frequency, too. And, critically, it started getting proteins complexed with other molecules right, as well.



AI in space: Karpathy suggests AI chatbots as interstellar messengers to alien civilizations

The new golden record —

Andrej Karpathy muses about sending an LLM binary that could “wake up” and answer questions.


On Thursday, renowned AI researcher Andrej Karpathy, formerly of OpenAI and Tesla, tweeted a lighthearted proposal that large language models (LLMs) like the one that runs ChatGPT could one day be modified to operate in or be transmitted to space, potentially to communicate with extraterrestrial life. He said the idea was “just for fun,” but with his influential profile in the field, the idea may inspire others in the future.

Karpathy’s bona fides in AI almost speak for themselves: he received a PhD from Stanford under computer scientist Dr. Fei-Fei Li in 2015, became one of the founding members of OpenAI as a research scientist, then served as senior director of AI at Tesla between 2017 and 2022. In 2023, Karpathy rejoined OpenAI for a year, leaving this past February. He’s posted several highly regarded tutorials covering AI concepts on YouTube, and whenever he talks about AI, people listen.

Most recently, Karpathy has been working on a project called “llm.c” that implements the training process for OpenAI’s 2019 GPT-2 LLM in pure C, dramatically speeding up the process and demonstrating that working with LLMs doesn’t necessarily require complex development environments. The project’s streamlined approach and concise codebase sparked Karpathy’s imagination.

“My library llm.c is written in pure C, a very well-known, low-level systems language where you have direct control over the program,” Karpathy told Ars. “This is in contrast to typical deep learning libraries for training these models, which are written in large, complex code bases. So it is an advantage of llm.c that it is very small and simple, and hence much easier to certify as Space-safe.”

Our AI ambassador

In his playful thought experiment (titled “Clearly LLMs must one day run in Space”), Karpathy suggested a two-step plan where, initially, the code for LLMs would be adapted to meet rigorous safety standards, akin to “The Power of 10 Rules” adopted by NASA for space-bound software.

This first part he deemed serious: “We harden llm.c to pass the NASA code standards and style guides, certifying that the code is super safe, safe enough to run in Space,” he wrote in his X post. “LLM training/inference in principle should be super safe – it is just one fixed array of floats, and a single, bounded, well-defined loop of dynamics over it. There is no need for memory to grow or shrink in undefined ways, for recursion, or anything like that.”

That’s important because when software is sent into space, it must operate under strict safety and reliability standards. Karpathy suggests that his code, llm.c, likely meets these requirements because it is designed with simplicity and predictability at its core.
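
To give a flavor of the coding style being described (a hypothetical sketch, not actual llm.c code), here is a C loop in that spirit: all state lives in fixed, statically sized arrays of floats, and the work is a single bounded, well-defined loop over them, with no dynamic allocation, no recursion, and no unbounded iteration, in line with the intent of NASA’s “Power of 10” rules.

```c
#include <stdio.h>

#define N_PARAMS 1024   /* all state is a fixed array of floats... */
#define N_STEPS  100    /* ...and the work is a single bounded loop over it */

static float params[N_PARAMS];
static float acts[N_PARAMS];

/* One well-defined update step: no malloc, no recursion, no unbounded loops. */
static void step(void) {
    for (int i = 0; i < N_PARAMS; i++) {
        acts[i] = 0.99f * acts[i] + 0.01f * params[i];  /* placeholder dynamics */
    }
}

int main(void) {
    for (int i = 0; i < N_PARAMS; i++) {
        params[i] = (float)i / N_PARAMS;  /* deterministic initialization */
    }
    for (int t = 0; t < N_STEPS; t++) {
        step();
    }
    printf("acts[0] = %f\n", acts[0]);
    return 0;
}
```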

In step 2, once this LLM was deemed safe for space conditions, it could theoretically be used as our AI ambassador in space, similar to historic initiatives like the Arecibo message (a radio message sent from Earth to the Messier 13 globular cluster in 1974) and Voyager’s Golden Record (two identical gold records sent on the two Voyager spacecraft in 1977). The idea is to package the “weights” of an LLM—essentially the model’s learned parameters—into a binary file that could then “wake up” and interact with any potential alien technology that might decipher it.

“I envision it as a sci-fi possibility and something interesting to think about,” he told Ars. “The idea that it is not us that might travel to stars but our AI representatives. Or that the same could be true of other species.”



Judge mulls sanctions over Google’s “shocking” destruction of internal chats

Kenneth Dintzer, litigator for the US Department of Justice, exits federal court in Washington, DC, on September 20, 2023, during the antitrust trial to determine if Alphabet Inc.’s Google maintains a monopoly in the online search business.

Near the end of the second day of closing arguments in the Google monopoly trial, US District Judge Amit Mehta weighed whether sanctions were warranted over what the US Department of Justice described as Google’s “routine, regular, and normal destruction” of evidence.

Google was accused of enacting a policy instructing employees to turn chat history off by default when discussing sensitive topics, including Google’s revenue-sharing and mobile application distribution agreements. These agreements, the DOJ and state attorneys general argued, work to maintain Google’s monopoly over search.

According to the DOJ, Google destroyed potentially hundreds of thousands of chat sessions not just during its investigation but also during litigation. Google only stopped the practice after the DOJ discovered the policy. DOJ attorney Kenneth Dintzer told Mehta Friday that the DOJ believed the court should “conclude that communicating with history off shows anti-competitive intent to hide information because they knew they were violating antitrust law.”

Mehta at least agreed that “Google’s document retention policy leaves a lot to be desired,” expressing shock and surprise that a large company like Google would ever enact such a policy as best practice.

Google’s attorney Colette Connor told Mehta that the DOJ should have been aware of Google’s policy long before the DOJ challenged the conduct. Google had explicitly disclosed the policy to Texas’ attorney general, who was involved in DOJ’s antitrust suit over both Google’s search and adtech businesses, Connor said.

Connor also argued that Google’s conduct wasn’t sanctionable because there is no evidence that any of the missing chats would’ve shed any new light on the case. Mehta challenged this somewhat, telling Connor, “We just want to know what we don’t know. We don’t know if there was a treasure trove of material that was destroyed.”

During rebuttal, Dintzer told Mehta that Google’s decision to tell Texas about the policy but not the federal government did not satisfy its disclosure obligation under the Federal Rules of Civil Procedure in the case. The relevant rule says that “only upon finding that the party acted with the intent to deprive another party of the information’s use in the litigation may” the court “presume that the lost information was unfavorable to the party.”

The DOJ has asked the court to make that ruling and issue four orders sanctioning Google. They want the court to order the “presumption that deleted chats were unfavorable,” the “presumption that Google’s proffered justification” for deleting chats “is pretextual” (concealing Google’s true rationale), and the “presumption that Google intended” to delete chats to “maintain its monopoly.” The government also wants a “prohibition on argument by Google that the absence of evidence is evidence of adverse inference,” which would stop Google from arguing that the DOJ is just assuming the deleted chats are unfavorable to Google.

Mehta asked Connor if she would agree that, at “minimum,” it was “negligent” of Google to leave it to employees to preserve chats on sensitive discussions, but Connor disagreed. She argued that “given the typical use of chat,” Google’s history-off policy was “reasonable.”

Connor told Mehta that the DOJ must prove that Google intended to hide evidence for the court to order sanctions.

That intent could be demonstrated another way, Mehta suggested, recalling that “Google has been very deliberate in advising employees about what to say and what not to say” in discussions that could indicate monopolistic behaviors. That included telling employees, “Don’t use the term markets,” Mehta told Connor, asking if that kind of conduct could be interpreted as Google’s intent to hide evidence.

But Connor disagreed again.

“No, we don’t think you can use it as evidence,” Connor said. “It’s not relevant to the claims in this case.”

But during rebuttal, Dintzer argued that there was evidence of its relevance. He said that testimony from Google employees showed that Google’s chat policy “was uniformly used as a way of communicating without creating discoverable information” intentionally to hide the alleged antitrust violations.



Apple deal could have been “suicide” for Google, company lawyer says

Woulda coulda shoulda? —

Judge: What should Google have done to avoid the DOJ’s crosshairs?

John Schmidtlein, partner at Williams & Connolly LLP and lead litigator for Alphabet Inc.’s Google, arrives to federal court in Washington, DC, US, on Monday, Oct. 2, 2023.

Halfway through the first day of closing arguments in the Department of Justice’s big antitrust trial against Google, US District Judge Amit Mehta posed the question that many Google users have likely pondered over years of DOJ claims that Google’s market dominance has harmed users.

“What should Google have done to remain outside the crosshairs of the DOJ?” Mehta asked plaintiffs halfway through the first of two full days of closing arguments.

According to the DOJ and state attorneys general suing, Google has diminished search quality everywhere online, primarily by locking rivals out of default positions on devices and in browsers. By paying billions for default placements that the government has argued allowed Google to hoard traffic and profits, Google allegedly made it nearly impossible for rivals to secure enough traffic to compete, ultimately decreasing competition and innovation in search by limiting the number of viable search engines in the market.

The DOJ’s lead litigator, Kenneth Dintzer, told Mehta that what Google should have done was acknowledge that the search giant had an enormous market share and consider its duties more carefully under antitrust law. Instead, Dintzer alleged, Google chose the route of “hiding” and “destroying documents” because it was aware of conflicts with antitrust law.

“What should Google have done?” Dintzer told Mehta. “They should have recognized that by demanding locking down every default that they were opening themselves up to a challenge on the conduct.”

The most controversial default agreement that Google has made is a 21-year deal with Apple that Mehta has described as the “heart” of the government’s case against Google. During the trial, a witness accidentally blurted out Google’s carefully guarded secret of just how highly it values the Apple deal, revealing that Google pays 36 percent of its search advertising revenue from Safari just to remain the default search tool in Apple’s browser. In 2022 alone, trial documents revealed that Google paid Apple $20 billion for the deal, Bloomberg reported.

That’s in stark contrast to the 12 percent of revenue that Android manufacturers get from their default deals with Google. The government wants the court to consider all these default deals to be anti-competitive, with Dintzer suggesting during closing arguments that they are the “centerpiece” of “a lot” of Google’s exclusionary behavior that ultimately allowed Google to become the best search engine today—by “capturing the default and preventing rivals from getting access to those defaults.”

Google’s lawyers have argued that Google succeeds on its merits. Today, lead litigator John Schmidtlein repeatedly pointed out that antitrust law is designed to protect the competitive process, not specific competitors who fail to invest and innovate—as Microsoft did by failing to recognize how crucial mobile search would become.

“Merely getting advantages by winning on quality, they may have an effect on a rival, but the question is, does it have an anti-competitive effect?” Schmidtlein argued, noting that the DOJ hadn’t “shown that absent the agreements, Microsoft would have toppled Google.”

But Dintzer argued that “a mistake by one rival doesn’t mean that Google gets to monopolize this market forever.” When asked to explain why everyone—including some of Google’s rivals—testified that Google won contracts purely because it was the best search engine, Dintzer warned Mehta that the fact that Google’s rivals “may be happy cashing Google’s checks doesn’t tell us anything.”

According to Schmidtlein, Google could have crossed the line with the Apple deal, but it didn’t.

“Google didn’t go on to say to Apple, if you don’t make us the default, no Google search on Apple devices at all,” Schmidtlein argued. “That would be suicide for Google.”

It’s still unclear how Mehta may be leaning in this case, interrogating both sides with care and making it clear that he expects all his biggest questions to be answered after closing arguments conclude Friday evening.

But Mehta did suggest at one point today that it seemed potentially “impossible” for anyone to compete with Google for default placements.

“How would anybody be able to spend billions and billions of dollars to possibly dislodge Google?” Mehta asked. “Is there any real competition for the default spot?”

According to Schmidtlein, that is precisely what “competition on the merits” looks like.

“Google is winning because it’s better, and Apple is deciding Google is better for users,” Schmidtlein argued. “The antitrust laws are not designed to ensure a competitive market. They’re designed to ensure a competitive process.”

Proving the potential anti-competitive effects of Google’s default agreements, particularly the Apple deal, has long been regarded as critical to winning the government’s case. So it’s no surprise that the attorney representing state attorneys general, Bill Cavanaugh, praised Mehta for asking, “What should Google have done?” According to Cavanaugh, that was the “right question” to pose in this trial.

“What should they have done 10 years ago when there was a recognition” that “we’re monopolists” and “we have substantial control in markets” is ask, “How should we proceed with our contracts?” Cavanaugh argued. “That’s the question that they answered, but they answered it in the wrong way.”

If Google’s default contracts posed fewer exclusionary concerns, the government seems to be arguing, there would be more competition and therefore more investment and innovation in search. But as long as Google controls the general search market, the government alleged, users won’t be able to search the web the way that they want.

Google is hoping that Mehta will reject the government’s theories and instead rule that Google has done nothing to stop rivals from improving the search landscape. Early in the day, Mehta told the DOJ that he was “struggling to see” how Google has either stopped innovating or degraded its search engine as a result of lack of competition.

Closing arguments continue on Friday. Mehta is not expected to rule until late summer or early fall.



Email Microsoft didn’t want seen reveals rushed decision to invest in OpenAI

I’ve made a huge mistake —

Microsoft CTO made a “mistake” dismissing Google’s AI as a “game-playing stunt.”


In mid-June 2019, Microsoft co-founder Bill Gates and CEO Satya Nadella received a rude awakening in an email warning that Google had officially gotten too far ahead on AI and that Microsoft may never catch up without investing in OpenAI.

With the subject line “Thoughts on OpenAI,” the email came from Microsoft’s chief technology officer, Kevin Scott, who is also the company’s executive vice president of AI. In it, Scott said that he was “very, very worried” that he had made “a mistake” by dismissing Google’s initial AI efforts as a “game-playing stunt.”

It turned out, Scott suggested, that instead of goofing around, Google had been building critical AI infrastructure that was already paying off, according to a competitive analysis of Google’s products that Scott said showed that Google was competing even more effectively in search. Scott realized that while Google was already moving on to production for “larger scale, more interesting” AI models, it might take Microsoft “multiple years” before it could even attempt to compete with Google.

As just one example, Scott warned, “their auto-complete in Gmail, which is especially useful in the mobile app, is getting scarily good.”

Microsoft had tried to keep this internal email hidden, but late Tuesday it was made public as part of the US Justice Department’s antitrust trial over Google’s alleged search monopoly. The email was initially sealed because Microsoft argued that it contained confidential business information, but The New York Times intervened to get it unsealed, arguing that Microsoft’s privacy interests did not outweigh the need for public disclosure.

In an order unsealing the email among other documents requested by The Times, US District Judge Amit Mehta allowed some of the “sensitive statements in the email concerning Microsoft’s business strategies that weigh against disclosure” to be redacted—which included basically all of Scott’s “thoughts on OpenAI.” But other statements “should be disclosed because they shed light on Google’s defense concerning relative investments by Google and Microsoft in search,” Mehta wrote.

At the trial, Google sought to convince Mehta that Microsoft, for example, had failed to significantly invest in mobile early on, giving Google a competitive advantage in mobile search that it still enjoys today. Scott’s email seems to suggest that Microsoft was similarly dragging its feet on investing in AI until Scott’s wakeup call.

Nadella’s response to the email was immediate. He promptly forwarded the email to Microsoft’s chief financial officer, Amy Hood, on the same day that he received it. Scott’s “very good email,” Nadella told Hood, explained “why I want us to do this.” By “this,” Nadella presumably meant exploring investment opportunities in OpenAI.

Mere weeks later, Microsoft had invested $1 billion into OpenAI, and there have been billions more invested since through an extended partnership agreement. In 2024, the two companies’ finances appeared so intertwined that the European Union suspected Microsoft was quietly controlling OpenAI and began investigating whether the companies still operate independently. Ultimately, the EU dismissed the probe, deciding that Microsoft’s $13 billion in investments did not amount to an acquisition, Reuters reported.

Officially, Microsoft has said that its OpenAI partnership was formed “to accelerate AI breakthroughs to ensure these benefits are broadly shared with the world”—not to keep up with Google.

But at the Google trial, Nadella testified about the email, saying that partnering with companies like OpenAI ensured that Microsoft could continue innovating in search, as well as in other Microsoft services.

On the stand, Nadella also admitted that he had overhyped AI-powered Bing as potentially shaking up the search market, backing up the DOJ by testifying that in Silicon Valley, Internet search is “the biggest no-fly zone.” Even after partnering with OpenAI, Nadella said that for Microsoft to compete with Google in search, there are “limits to how much artificial intelligence can reshape the market as it exists today.”

During the Google trial, the DOJ argued that Google’s alleged search market dominance had hindered OpenAI’s efforts to innovate, too. “OpenAI’s ChatGPT and other innovations may have been released years ago if Google hadn’t monopolized the search market,” the DOJ argued, according to a Bloomberg report.

Closing arguments in the Google trial start tomorrow, with two days of final remarks scheduled, during which Mehta will have ample opportunity to ask lawyers on both sides the rest of his biggest remaining questions.

It’s somewhat obvious what Google will argue. Google has spent years defending its search business as competing on the merits—essentially arguing that Google dominates search simply because it’s the best search engine.

Yesterday, the US district court also unsealed Google’s proposed legal conclusions, which suggest that Mehta should reject all of the DOJ’s monopoly claims, partly due to the government’s allegedly “fatally flawed” market definitions. Throughout the trial, Google has maintained that the US government has failed to show that Google has a monopoly in any market.

According to Google, even its allegedly anticompetitive default browser agreement with Apple—which Mehta deemed the “heart” of the DOJ’s monopoly case—is not proof of monopoly powers. Rather, Google insisted, default browser agreements benefit competition by providing another avenue through which its rivals can compete.

The DOJ hopes to prove Google wrong, arguing that Google has gone to great lengths to block rivals from default placements and hide evidence of its alleged monopoly—including training employees to avoid using words that monopolists use.

Mehta has not yet disclosed when to expect his ruling, but it could come late this summer or early fall, AP News reported.

If Google loses, the search giant may be forced to change its business practices or potentially even break up its business. Nobody knows what that would entail, but when the trial started, a coalition of 20 civil society and advocacy groups recommended some potentially drastic remedies, including the “separation of various Google products from parent company Alphabet, including breakouts of Google Chrome, Android, Waze, or Google’s artificial intelligence lab Deepmind.”



Critics question tech-heavy lineup of new Homeland Security AI safety board

Adventures in 21st century regulation —

CEO-heavy board to tackle elusive AI safety concept and apply it to US infrastructure.


On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term “AI,” which can apply to a broad spectrum of computer technology, it’s unclear if this group will even be able to agree on what exactly they are safeguarding us from.

President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.

The fundamental assumption posed by the board’s existence, and reflected in Biden’s AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum for AI leaders to share information on AI security risks with the DHS.

It’s worth noting that the ill-defined nature of the term “Artificial Intelligence” does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all those things and more, and since many of those applications of AI work very differently, there’s no guarantee any two people on the board will be thinking about the same type of AI.

This confusion is reflected in the quotes provided by the DHS press release from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, “By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation’s air travel system.”

So, defining the scope of what AI exactly means—and which applications of AI are new or dangerous—might be one of the key challenges for the new board.

A roundtable of Big Tech CEOs attracts criticism

For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.

Upon reading the announcement, some critics took issue with the board composition. On LinkedIn, Timnit Gebru, founder of The Distributed AI Research Institute (DAIR), especially criticized OpenAI’s presence on the board, writing, “I’ve now seen the full list and it is hilarious. Foxes guarding the hen house is an understatement.”



Android TV has access to your entire account—but Google is changing that

It’s all just Android —

Should sideloading Chrome on an old smart TV really compromise your entire account?


Google says it has patched a nasty loophole in the Android TV account security system that let anyone with physical access to your device get into your entire Google account just by sideloading some apps. As 404 Media reports, the issue was originally brought to Google’s attention by US Sen. Ron Wyden (D-Ore.) as part of a “review of the privacy practices of streaming TV technology providers.” Google originally told the senator that the issue was expected behavior but, after media coverage, decided to change its stance and issue some kind of patch.

“My office is mid-way through a review of the privacy practices of streaming TV technology providers,” Wyden told 404 Media. “As part of that inquiry, my staff discovered an alarming video in which a YouTuber demonstrated how with 15 minutes of unsupervised access to an Android TV set-top box, a criminal could get access to private emails of the Gmail user who set up the TV.”

The video in question was a PSA from YouTuber Cameron Gray, and it shows that grabbing any Android TV device and sideloading a few apps will grant access to the current Google account. This is obvious if you know how Android works, but it’s not obvious to most users looking at a limited TV interface.

The heart of the issue is how Android treats your Google account. Since the OS started on phones, every Android device starts with the assumption that it is a private, one-person device. Google has built on top of that feature with multiuser support and guest accounts, but these aren’t part of the default setup flow, can be hard to find, and are probably disabled on many Android TV boxes. The result is that signing in to an Android TV device often gives it access to your entire Google account.

Android has a centralized Google account system shared by a million Google-centric background and syncing processes, the Play Store, and nearly all Google apps. When you boot an Android device for the first time, the guided setup asks for a Google account, which is expected to live on the device forever as the owner’s primary account. Any new Google app you add to your device automatically gets access to this central Google account repository, so if you set up the phone and then install Google Keep, Keep automatically gets signed in and gains access to your notes. During the initial setup, where you might install 10 different apps that use a Google account, it would be annoying to enter your username and password over and over again.

This centralized account system is hungry for Google accounts, so any Google account you use to sign in to any Google app gets sucked into the central account system, even if you decline the initial setup. A common annoyance is to have a Google Workspace account at work, then sign into Gmail for work email and then have to deal with this useless work account showing up in the Play Store, Maps, Photos, etc.

For TVs, this presents a unique gotcha because, while you will still be forced to log in to download something from the Play Store, it’s not obvious to the user that you’re granting this device access to your entire Google account—including to potentially sensitive things like location history, emails, and messages. To the average user, a TV device just shows “TV stuff” like your YouTube recommendations and a few TV-specific Play Store apps, so you might not consider it to be a high-sensitivity sign-in. But if you just sideload a few more Google apps, you can get access to anything. Further confusing matters is Google’s OAuth strategy, which teaches users that there are things like scoped access to a Google account on third-party devices or sites, but Android does not work that way.

In the video, Gray simply grabs an Android TV device, goes to a third-party Android app site, then sideloads Chrome. Chrome automatically signs in to the TV owner’s Google account and has access to all passwords and cookies, which means access to Gmail, Photos, Chat history, Drive files, YouTube accounts, AdSense, any site that allows for Google sign-in, and partial credit card info. It’s all available in Chrome without any security checks. Individual apps like Gmail and Google Photos would immediately start working, too.

As Gray’s video points out, Android TV devices can be dongles, set-top boxes, or code installed right into a TV. In businesses and hotels, they can be semi-public devices. It’s also not hard to imagine a TV device falling into the hands of someone else. You might not worry too much about forgetting a $30 Chromecast in a hotel room, or you might sign in to a hotel TV and forget to delete your account, or you might throw out a TV and not think twice about what account it’s signed in to. If an attacker gets access to any of these devices later, it’s trivial to unlock your entire Google account.

Google says it has fixed this problem, though it doesn’t explain how. The company’s statement to 404 says, “Most Google TV devices running the latest versions of software already do not allow this depicted behavior. We are in the process of rolling out a fix to the rest of the devices. As a best security practice, we always advise users to update their devices to the latest software.”

Many Android TV devices, especially those built into TV sets, are abandonware and run an old version of the software, but Google’s account system is updatable via the Play Store, so there’s a good chance a fix can roll out to most devices.



Google can’t quit third-party cookies—delays shutdown for a third time

This post was written in Firefox —

Google says UK regulator testing means the advertising tech will last until 2025.


Will Chrome, the world’s most popular browser, ever kill third-party cookies? Apple and Mozilla both killed off the user-tracking technology in 2020. Google, the world’s largest advertising company, originally said it wouldn’t kill third-party cookies until 2022. Then in 2021, it delayed the change until 2023. In 2022, it delayed everything again, until 2024. It’s 2024 now, and guess what? Another delay. Now Google says it won’t turn off third-party cookies until 2025, five years after the competition.

A new blog post cites UK regulations as the reason for the delay, saying, “We recognize that there are ongoing challenges related to reconciling divergent feedback from the industry, regulators and developers, and will continue to engage closely with the entire ecosystem.” The post comes as part of the quarterly reports the company is producing with the UK’s Competition and Markets Authority (CMA).

Interestingly, the UK’s CMA isn’t concerned about user privacy but instead is worried about other web advertisers that compete with Google. The UK wants to make sure that Google isn’t making changes to Chrome to prop up its advertising business at the expense of competitors. While other browser vendors shut down third-party cookies without a second thought, Google said it wouldn’t turn off the user-tracking feature until it built an alternative advertising feature directly into Chrome, so it can track user interests to serve them relevant ads. The new advertising system, called the Topics API and “Privacy Sandbox,” launched in Chrome in 2023. Google AdSense is already compatible.

The UK is worried that Chrome’s new ad system might give Google’s ad division an unfair advantage. Google and the UK CMA are talking it out, and Google says it’s “critical that the CMA has sufficient time to review all evidence, including results from industry tests, which the CMA has asked market participants to provide by the end of June.” Google has a public testing suite for Chrome’s new ad system to allow for feedback. Given all the testing data that needs to be pored over, Google says, “We will not complete third-party cookie deprecation during the second half of Q4.” We’ll check back next year!



First real-life Pixel 9 Pro pictures leak, and it has 16GB of RAM

OK, but what if I don’t care about generative AI? —

With 16GB of RAM, there’s a lot of room for Google’s AI models to live in memory.

OnLeaks’ renders of the Pixel 9 Pro XL, the Pixel 9 Pro, and the Pixel 9.

OnLeaks / 91Mobiles / MySmartPrice

The usual timeline would put the Google Pixel 9 at something like five months away from launching, but that doesn’t mean it’s too early to leak! Real-life pictures of the “Pixel 9 Pro” model have landed over at Rozetked.

This prototype looks just like the renders from OnLeaks that first came out back in January. The biggest change is a new pill-shaped camera bump instead of the edge-to-edge design of old models. It looks rather stylish in real-life photos, with the rounded corners of the pill and camera glass matching the body shape. The matte back looks like it still uses the excellent “soft-touch glass” material from last year. The front and back of the phone are totally flat, with a metal band around the side. The top edge still has a signal window cut out of it, which is usually for mmWave. The Pixel 8 Pro’s near-useless temperature sensor appears to still be on the back of this prototype. At least, the spot for the temperature sensor—the silver disk right below the LED camera flash—looks identical to the Pixel 8 Pro. As a prototype any of this could change before the final release, but this is what it looks like right now.

The phone was helpfully photographed next to an iPhone 14 Pro Max, and you might notice that the Pixel 9 Pro looks a little small! That’s because this is one of the small models, with only a 6.1-inch display. Previously for Pixels, “Pro” meant “the big model,” but this year Google is supposedly shipping three models, adding in a top-tier small phone. There’s the usual big Pixel 9, with a 6.7-inch display, which will reportedly be called the “Pixel 9 Pro XL.” The new model is the “Pixel 9 Pro”—no XL—which is a small model but still with all the “Pro” trimmings, like three rear cameras. There’s also the Pixel 9 base model, which is the usual smaller phone (6.03-inch) with cut-down specs like only two rear cameras.

The Pixel 9 Pro prototype. It’s small because this is the “small Pro” model. There are more pictures over at Rozetked.

Rozetked says (through translation) that the phone is  “similar in size to the iPhone 15 Pro.” It runs a Tensor G4 SoC, of course, and—here’s a noteworthy spec—has a whopping 16GB of RAM according to the bootloader screen. The Pixel 8 Pro tops out at 12GB.

Anything could change between prototype and product, especially for RAM, which is usually scaled up and down in various phone tiers. A jump in RAM is something we were expecting though. As part of Google’s new AI-focused era, it wants generative AI models turned on 24/7 for some use cases. Google said as much in a recent in-house podcast, pointing to some features like a new version of Smart Reply built right into the keyboard, which “requires the models to be RAM-resident”—in other words, loaded all the time. Google’s desire to keep generative AI models in memory means less RAM for your operating system to actually do operating system things, and one solution to that is to just add more RAM. So how much RAM is enough? At one point Google said the smaller Pixel 8’s 8GB of RAM was too much of a “hardware limitation” for this approach. Google PR also recently told us the company still hasn’t enabled generative AI smart reply on Pixel 8 Pro by default with its 12GB of RAM, so expect these RAM numbers to start shooting up.

The downside is that more RAM means a more expensive phone, but this is the path Google is going down. There’s also the issue of whether or not you view generative AI as something that is so incredibly useful you need it built into your keyboard 24/7. Google wants its hardware to be “the intersection of hardware, software, and AI,” so keeping all this ChatGPT-like stuff quarantined to a single app apparently won’t be an option.

One final note: It’s weird how normal this phone looks. Usually, Pixel prototypes have a unique logo that isn’t the Google “G,” and often they are covered in identifying patterns for leak tracing. This looks like a production-worthy design, though.
