Author name: Kris Guyer


We’re closer to re-creating the sounds of Parasaurolophus

The duck-billed dinosaur Parasaurolophus is distinctive for its prominent crest, which some scientists have suggested served as a kind of resonating chamber to produce low-frequency sounds. Nobody really knows what Parasaurolophus sounded like, however. Hongjun Lin of New York University is trying to change that by constructing his own model of the dinosaur’s crest and its acoustical characteristics. Lin has not yet reproduced the call of Parasaurolophus, but he talked about his progress thus far at a virtual meeting of the Acoustical Society of America.

Lin was inspired in part by the dinosaur sounds featured in the Jurassic Park film franchise, which were a combination of sounds from other animals like baby whales and crocodiles. “I’ve been fascinated by giant animals ever since I was a kid. I’d spend hours reading books, watching movies, and imagining what it would be like if dinosaurs were still around today,” he said during a press briefing. “It wasn’t until college that I realized the sounds we hear in movies and shows—while mesmerizing—are completely fabricated using sounds from modern animals. That’s when I decided to dive deeper and explore what dinosaurs might have actually sounded like.”

A skull and partial skeleton of Parasaurolophus were first discovered in 1920 along the Red Deer River in Alberta, Canada, and another partial skull was discovered the following year in New Mexico. There are now three known species of Parasaurolophus; the name means “near crested lizard.” While no complete skeleton has yet been found, paleontologists have concluded that the adult dinosaur likely stood about 16 feet tall and weighed between 6,000 and 8,000 pounds. Parasaurolophus was an herbivore that could walk on all four legs while foraging for food but may have run on two legs.

It’s that distinctive crest that has most fascinated scientists over the last century, particularly its purpose. Past hypotheses have included its use as a snorkel or as a breathing tube while foraging for food; as an air trap to keep water out of the lungs; or as an air reservoir so the dinosaur could remain underwater for longer periods. Other scientists suggested the crest was designed to help move and support the head or perhaps used as a weapon while combating other Parasaurolophus. All of these, plus a few others, have largely been discredited.



Android will soon instantly log you in to your apps on new devices

If you lose your iPhone or buy an upgrade, you could reasonably expect to be up and running after an hour, presuming you backed up your prior model. Your Apple stuff all comes over, sure, but most of your third-party apps will still be signed in.

Doing the same swap with an Android device is more akin to starting three-quarters fresh. After one or two Android phones, you learn to bake in an extra hour of rapid-fire logging in to all your apps. Password managers, or just using a Google account as your authentication, are a godsend.

That might change relatively soon, as Google has announced a new Restore Credentials feature, which should do what it says in the name. Android apps can “seamlessly onboard users to their accounts on a new device,” with the restore keys handled by Android’s native backup and restore process. The experience, says Google, is “delightful” and seamless. You can even get the same notifications on the new device as you were receiving on the old.



Qubit that makes most errors obvious now available to customers


Can a small machine that makes error correction easier upend the market?

A graphic representation of the two resonance cavities that can hold photons, along with a channel that lets the photon move between them. Credit: Quantum Circuits

We’re nearing the end of the year, and there is typically a flood of announcements regarding quantum computers around now, in part because some companies want to live up to promised schedules. Most of these involve evolutionary improvements on previous generations of hardware. But this year, we have something new: the first company to market with a new qubit technology.

The technology is called a dual-rail qubit, and it is intended to make the most common form of error trivially easy to detect in hardware, thus making error correction far more efficient. And, while tech giant Amazon has been experimenting with them, a startup called Quantum Circuits is the first to give the public access to dual-rail qubits via a cloud service.

While the tech is interesting on its own, it also provides us with a window into how the field as a whole is thinking about getting error-corrected quantum computing to work.

What’s a dual-rail qubit?

Dual-rail qubits are variants of the hardware used in transmons, the qubits favored by companies like Google and IBM. The basic hardware unit links a loop of superconducting wire to a tiny cavity that allows microwave photons to resonate. This setup allows the presence of microwave photons in the resonator to influence the behavior of the current in the wire and vice versa. In a transmon, microwave photons are used to control the current; other companies’ hardware does the reverse, controlling the state of the photons by altering the current.

Dual-rail qubits use two of these systems linked together, allowing a photon to move from one resonator to the other. Using the superconducting loops, it’s possible to control the probability that a photon will end up in the left or right resonator. The actual location of the photon will remain unknown until it’s measured, allowing the system as a whole to hold a single bit of quantum information—a qubit.

This has an obvious disadvantage: You have to build twice as much hardware for the same number of qubits. So why bother? Because the vast majority of errors involve the loss of the photon, and that’s easily detected. “It’s about 90 percent or more [of the errors],” said Quantum Circuits’ Andrei Petrenko. “So it’s a huge advantage that we have with photon loss over other errors. And that’s actually what makes the error correction a lot more efficient: The fact that photon losses are by far the dominant error.”

Petrenko said that, without doing a measurement that would disrupt the storage of the qubit, it’s possible to determine if there is an odd number of photons in the hardware. If that isn’t the case, you know an error has occurred—most likely a photon loss (gains of photons are rare but do occur). For simple algorithms, this would be a signal to simply start over.
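To see why flagging photon loss helps so much, here’s a toy Monte Carlo sketch. It is purely illustrative: only the roughly 90/10 split between photon loss and other errors comes from Petrenko’s comments, while the 1 percent raw error rate is an assumed number.

```python
import random

def run_shots(n_shots, p_error, loss_fraction=0.9, seed=0):
    """Toy dual-rail model: most errors are photon losses, which a
    parity check flags as detected erasures; the remainder (phase
    flips) slip through undetected."""
    rng = random.Random(seed)
    detected = undetected = 0
    for _ in range(n_shots):
        if rng.random() < p_error:
            if rng.random() < loss_fraction:
                detected += 1    # photon loss: the parity check catches it
            else:
                undetected += 1  # phase flip: invisible to the parity check
    return detected, undetected

detected, undetected = run_shots(100_000, p_error=0.01)
# With a 1 percent raw error rate, the rate of errors that sneak past
# the parity check is roughly ten times lower than the raw rate.
```

The point of the sketch is that turning 90 percent of errors into flagged erasures leaves the error-correction machinery with an order of magnitude less to clean up.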

But it does not eliminate the need for error correction if we want to do more complex computations that can’t make it to completion without encountering an error. There’s still the remaining 10 percent of errors, which are primarily phase flips, an error type unique to quantum systems. Bit flips are even rarer in dual-rail setups. Finally, simply knowing that a photon was lost doesn’t tell you everything you need to know to fix the problem; error-correction measurements of other parts of the logical qubit are still needed to fix any problems.

The layout of the new machine. Each qubit (gray square) involves a left and right resonance chamber (blue dots) that a photon can move between. Each of the qubits has connections that allow entanglement with its nearest neighbors. Credit: Quantum Circuits

In fact, the initial hardware that’s being made available is too small to even approach useful computations. Instead, Quantum Circuits chose to link eight qubits with nearest-neighbor connections so that the machine can host a single error-corrected logical qubit. Put differently, this machine is meant to let people learn how to use the unique features of dual-rail qubits to improve error correction.

One consequence of having this distinctive hardware is that the software stack that controls operations needs to take advantage of its error detection capabilities. None of the other hardware on the market can be directly queried to determine whether it has encountered an error. So, Quantum Circuits has had to develop its own software stack to allow users to actually benefit from dual-rail qubits. Petrenko said that the company also chose to provide access to its hardware via its own cloud service because it wanted to connect directly with the early adopters in order to better understand their needs and expectations.

Numbers or noise?

Given that a number of companies have already released multiple revisions of their quantum hardware and have scaled them into hundreds of individual qubits, it may seem a bit strange to see a company enter the market now with a machine that has just a handful of qubits. But amazingly, Quantum Circuits isn’t alone in planning a relatively late entry into the market with hardware that only hosts a few qubits.

Having talked with several of them, I can say there is a logic to what they’re doing. What follows is my attempt to convey that logic in a general form, without focusing on any single company’s case.

Everyone agrees that the future of quantum computation is error correction, which requires linking together multiple hardware qubits into a single unit termed a logical qubit. To get really robust, error-free performance, you have two choices. One is to devote lots of hardware qubits to the logical qubit, so you can handle multiple errors at once. Or you can lower the error rate of the hardware, so that you can get a logical qubit with equivalent performance while using fewer hardware qubits. (The two options aren’t mutually exclusive, and everyone will need to do a bit of both.)

The two options pose very different challenges. Improving the hardware error rate means diving into the physics of individual qubits and the hardware that controls them. In other words, getting lasers that have fewer of the inevitable fluctuations in frequency and energy. Or figuring out how to manufacture loops of superconducting wire with fewer defects or handle stray charges on the surface of electronics. These are relatively hard problems.

By contrast, scaling qubit count largely involves being able to consistently do something you already know how to do. If you already know how to make good superconducting wire, you simply need to make a few thousand instances of that wire instead of a few dozen. The electronics that trap an atom can be designed so that producing thousands of copies is straightforward. These are mostly engineering problems, and generally of similar complexity to problems we’ve already solved to make the electronics revolution happen.

In other words, within limits, scaling is a much easier problem to solve than errors. It’s still going to be extremely difficult to get the millions of hardware qubits we’d need to error correct complex algorithms on today’s hardware. But if we can get the error rate down a bit, we can use smaller logical qubits and might only need 10,000 hardware qubits, which will be more approachable.
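That scaling claim can be made concrete with a back-of-the-envelope sketch using a common surface-code rule of thumb: the logical error rate goes roughly as 0.1·(p/p_th)^((d+1)/2) for code distance d, and a distance-d patch needs about 2d² physical qubits. The threshold, target, and physical error rates below are illustrative assumptions, not any vendor’s numbers.

```python
def qubits_for_target(p_phys, p_threshold=0.01, p_target=1e-12):
    """Find the smallest odd code distance d whose estimated logical
    error rate 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) meets the
    target, plus the ~2*d^2 physical qubits that patch requires."""
    ratio = p_phys / p_threshold
    d = 3
    while 0.1 * ratio ** ((d + 1) / 2) > p_target:
        d += 2
    return d, 2 * d * d

# A modest improvement in the physical error rate shrinks the patch a lot:
d_hi, q_hi = qubits_for_target(5e-3)  # error rate at half the threshold
d_lo, q_lo = qubits_for_target(1e-3)  # error rate at a tenth of the threshold
# The lower physical error rate cuts the per-logical-qubit overhead
# by roughly an order of magnitude.
```

Under these assumed numbers, the higher-error machine needs on the order of 10,000 physical qubits per logical qubit, while the lower-error one needs around a thousand, which is the trade-off driving the “errors first” strategy.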

Errors first

And there’s evidence that even the early entries in quantum computing have reasoned the same way. Google has been working on iterations of the same chip design since its 2019 quantum supremacy announcement, focusing on understanding the errors that occur on improved versions of that chip. IBM made hitting the 1,000 qubit mark a major goal but has since been focused on reducing the error rate in smaller processors. Someone at a quantum computing startup once told us it would be trivial to trap more atoms in its hardware and boost the qubit count, but there wasn’t much point in doing so given the error rates of the qubits on the then-current generation machine.

The new companies entering this market now are making the argument that they have a technology that will either radically reduce the error rate or make handling the errors that do occur much easier. Quantum Circuits clearly falls into the latter category, as dual-rail qubits are entirely about making the most common form of error trivial to detect. The former category includes companies like Oxford Ionics, which has indicated it can perform single-qubit gates with a fidelity of over 99.9991 percent. Or Alice & Bob, which stores qubits in the behavior of multiple photons in a single resonance cavity, making them very robust to the loss of individual photons.

These companies are betting that they have distinct technology that will let them handle error rate issues more effectively than established players. That will lower the total scaling they need to do, and scaling will be an easier problem overall—and one that they may already have the pieces in place to handle. Quantum Circuits’ Petrenko, for example, told Ars, “I think that we’re at the point where we’ve gone through a number of iterations of this qubit architecture where we’ve de-risked a number of the engineering roadblocks.” And Oxford Ionics told us that if they could make the electronics they use to trap ions in their hardware once, it would be easy to mass manufacture them.

None of this should imply that these companies will have it easy compared to a startup that already has experience with both reducing errors and scaling, or a giant like Google or IBM that has the resources to do both. But it does explain why, even at this stage in quantum computing’s development, we’re still seeing startups enter the field.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Automatic braking systems save lives. Now they’ll need to work at 62 mph.

Otherwise, drivers will get mad. “The mainstream manufacturers have to be a little careful because they don’t want to create customer dissatisfaction by making the system too twitchy,” says Brannon, at AAA. Tesla drivers, for example, have proven very tolerant of “beta testing” and quirks. Your average driver, maybe less so.

Based on its own research, IIHS has pushed automakers to install AEB systems able to operate at faster speeds on their cars. Kidd says IIHS research suggests there have been no systemic, industry-wide issues with safety and automatic emergency braking. Fewer and fewer drivers seem to be turning off their AEB systems out of annoyance. (The new rules make it so drivers can’t turn them off.) But US regulators have investigated a handful of automakers, including General Motors and Honda, for automatic emergency braking issues that have reportedly injured more than 100 people, though automakers have reportedly fixed the issue.

New complexities

Getting cars to fast-brake at even higher speeds will require a series of tech advances, experts say. AEB works by bringing in data from sensors. That information is then turned over to automakers’ custom-tuned classification systems, which are trained to recognize certain situations and road users—that’s a stopped car in the middle of the road up ahead or there’s a person walking across the road up there—and intervene.

So to get AEB to work in higher-speed situations, the tech will have to “see” farther down the road. Most of today’s new cars come loaded up with sensors, including cameras and radar, which can collect vital data. But the auto industry trade group argues that the Feds have underestimated the amount of new hardware—including, possibly, more expensive lidar units—that will have to be added to cars.

Brake-makers will have to tinker with components to allow quicker stops, which will require the pressurized fluid that moves through a brake’s hydraulic lines to go even faster. Allowing cars to detect hazards at greater distances could require different types of hardware, including sometimes-expensive sensors. “Some vehicles might just need a software update, and some might not have the right sensor suite,” says Bhavana Chakraborty, an engineering director at Bosch, an automotive supplier that builds safety systems. Those without the right hardware will need updates “across the board,” she says, to get to the levels of safety demanded by the federal government.
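The “see farther” demand follows from basic stopping-distance kinematics. This sketch uses assumed values for deceleration and system latency, not regulatory figures:

```python
def sensing_range_m(speed_mph, decel_ms2=8.0, latency_s=0.5):
    """Minimum distance at which a hazard must be detected for a full
    stop: ground covered during detection/actuation latency, plus the
    braking distance v^2 / (2a)."""
    v = speed_mph * 0.44704  # mph -> m/s
    return v * latency_s + v * v / (2 * decel_ms2)

r45 = sensing_range_m(45)  # roughly 35 m of required sensing range
r62 = sensing_range_m(62)  # roughly 62 m of required sensing range
```

Because braking distance grows with the square of speed, going from 45 to 62 mph pushes the required detection range up by roughly 75 percent under these assumptions, which is why longer-range (and pricier) sensors come into the picture.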



Apple TV+ spent $20B on original content. If only people actually watched.

For example, Apple TV+ is embracing bundles, a strategy thought to help keep subscribers from canceling streaming subscriptions. People can currently get Apple TV+ through a Comcast streaming bundle.

And as of last month people can subscribe to and view Apple TV+ through Amazon Prime Video. As my colleague Samuel Axon explained in October, this contradicts Apple’s long-standing approach to streaming “because Apple has long held ambitions of doing exactly what Amazon is doing here: establishing itself as the central library, viewing, search, and payment hub for a variety of subscription offerings.” But without support from Netflix, “Apple’s attempt to make the TV app a universal hub of content has been continually stymied,” Axon noted.

Something has got to give

With the broader streaming industry dealing with high production costs, disappointed subscribers, and growing competition, Apple, like many stakeholders, is looking for new approaches to entertainment. For Apple, that also reportedly includes fewer theatrical releases.

It may also one day mean joining what some streaming subscribers see as the dark side of streaming: advertisements. Apple TV+ currently remains ad-free, but there are suspicions that this could change; Apple has reportedly met with the United Kingdom’s TV ratings body recently about ad tracking and has been hiring ad executives.

Apple’s ad-free platform and comparatively low subscription prices are among the biggest draws for Apple TV+ subscribers, however, so changing either would be controversial.

But after five years on the market and a reported $20 billion in spending, Apple can’t be happy with 0.3 percent of available streaming viewership. Awards and prestige help put Apple TV+ on the map, but Apple needs more subscribers and eyeballs on its expensive content to have a truly successful streaming business.



Cable companies and Trump’s FCC chair agree: Data caps are good for you

Many Internet users filed comments asking the FCC to ban data caps. A coalition of consumer advocacy groups filed comments saying that “data caps are another profit-driving tool for ISPs at the expense of consumers and the public interest.”

“Data caps have a negative impact on all consumers but the effects are felt most acutely in low-income households,” stated comments filed by Public Knowledge, the Open Technology Institute at New America, the Benton Institute for Broadband & Society, and the National Consumer Law Center.

Consumer groups: Caps don’t manage congestion

The consumer groups said the COVID-19 pandemic “made it more apparent how data caps are artificially imposed restrictions that negatively impact consumers, discriminate against the use of certain high-data services, and are not necessary to address network congestion, which is generally not present on home broadband networks.”

“Unlike speed tiers, data caps do not effectively manage network congestion or peak usage times, because they do not influence real-time network load,” the groups also said. “Instead, they enable further price discrimination by pushing consumers toward more expensive plans with higher or unlimited data allowances. They are price discrimination dressed up as network management.”

Jessica Rosenworcel, who has been FCC chairwoman since 2021, argued last month that consumer complaints show the FCC inquiry is necessary. “The mental toll of constantly thinking about how much you use a service that is essential for modern life is real as is the frustration of so many consumers who tell us they believe these caps are costly and unfair,” Rosenworcel said.

ISPs lifting caps during the pandemic “suggest[s] that our networks have the capacity to meet consumer demand without these restrictions,” she said, adding that “some providers do not have them at all” and “others lifted them in network merger conditions.”



A lot of people are mistaking Elon Musk’s Starlink satellites for UAPs

That’s just Elon

But many UAP cases have verifiable explanations as airplanes, drones, or satellites, and lawmakers argue AARO might be able to solve more of the cases with more funding.

Airspace is busier than ever with air travel and consumer drones. More satellites are zooming around the planet as government agencies and companies like SpaceX deploy their constellations for Internet connectivity and surveillance. There’s more stuff up there to see.

“AARO increasingly receives cases that it is able to resolve to the Starlink satellite constellation,” the office said in this year’s annual report.

“For example, a commercial pilot reported white flashing lights in the night sky,” AARO said. “The pilot did not report an altitude or speed, and no data or imagery was recorded. AARO assessed that this sighting of flashing lights correlated with a Starlink satellite launch from Cape Canaveral, Florida, the same evening about one hour prior to the sighting.”

Jon Kosloski, director of AARO, said officials compared the parameters of these sightings with Starlink launches. When SpaceX releases Starlink satellites in orbit, the spacecraft are initially clustered together and reflect more sunlight down to Earth. This makes the satellites easier to see during twilight hours before they raise their orbits and become dimmer.

“We found some of those correlations in time, the direction that they were looking, and the location,” Kosloski said. “And we were able to assess that they were all in those cases looking at Starlink flares.”

SpaceX has more than 6,600 Starlink satellites in low-Earth orbit, more than half of all active spacecraft. Thousands more satellites for Amazon’s Kuiper broadband constellation and Chinese Internet networks are slated to launch in the next few years.

“AARO is investigating if other unresolved cases may be attributed to the expansion of the Starlink and other mega-constellations in low-Earth orbit,” the report said.

The Starlink network is still relatively new. SpaceX launched the first Starlinks five years ago. Kosloski said he expects the number of erroneous UAP reports caused by satellites to go down as pilots and others understand what the Starlinks look like.

“It looks interesting and potentially anomalous. But we can model that, and we can show pilots what that anomaly looks like, so that that doesn’t get reported to us necessarily,” Kosloski said.



FTC to launch investigation into Microsoft’s cloud business

The FTC also highlighted fees charged on users transferring data out of certain cloud systems and minimum spend contracts, which offer discounts to companies in return for a set level of spending.

Microsoft has also attracted scrutiny from international regulators over similar matters. The UK’s Competition and Markets Authority is investigating Microsoft and Amazon after its fellow watchdog Ofcom found that customers complained about being “locked in” to a single provider that offers discounts for exclusivity and charges high “egress fees” to leave.

In the EU, Microsoft has avoided a formal probe into its cloud business after agreeing to a multimillion-dollar deal with a group of rival cloud providers in July.

The FTC in 2022 sued to block Microsoft’s $75 billion acquisition of video game maker Activision Blizzard over concerns the deal would harm competitors to its Xbox consoles and cloud-gaming business. A federal court shot down the FTC’s attempt to block the deal, a ruling the agency is appealing. In the meantime, a revised version of the deal closed last year following its clearance by the UK’s CMA.

Since its inception 20 years ago, the cloud infrastructure and services business has grown into one of the most lucrative lines for Big Tech as companies outsource their data storage and computing online. More recently, this growth has been turbocharged by demand for processing power to train and run artificial intelligence models.

Spending on cloud services soared to $561 billion in 2023, with market researcher Gartner forecasting it will grow to $675 billion this year and $825 billion in 2025. Microsoft holds about 20 percent of the global cloud market, trailing leader Amazon Web Services at 31 percent but nearly double Google Cloud’s 12 percent share.

There is fierce rivalry between the trio and smaller providers. Last month, Microsoft accused Google of running “shadow campaigns” seeking to undermine its position with regulators by secretly bankrolling hostile lobbying groups.

Microsoft also alleged that Google tried to derail its settlement with EU cloud providers by offering them $500 million in cash and credit to reject its deal and continue pursuing litigation.

The FTC and Microsoft declined to comment.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Half-Life 2 pushed Steam on the gaming masses… and the masses pushed back

It’s Half-Life 2 week at Ars Technica! This Saturday, November 16, is the 20th anniversary of the release of Half-Life 2—a game of historical importance for the artistic medium and technology of computer games. Each day leading up through the 16th, we’ll be running a new article looking back at the game and its impact.

When millions of eager gamers first installed Half-Life 2 20 years ago, many, if not most, of them found they needed to install another piece of software alongside it. Few at the time could imagine that this piece of companion software, with the pithy name Steam, would eventually become the key distribution point and social networking center for the entire PC gaming ecosystem, making the idea of physical PC games an anachronism in the process.

While Half-Life 2 wasn’t the first Valve game released on Steam, it was the first high-profile title to require the platform, even for players installing the game from physical retail discs. That requirement gave Valve access to millions of gamers with new Steam accounts and helped the company bypass traditional retail publishers of the day by directly marketing and selling its games (and, eventually, games from other developers). But 2004-era Steam also faced a vociferous backlash from players who saw the software as a piece of nuisance DRM (digital rights management) that did little to justify its existence at the time.

Free (from Vivendi) at last

Years before Half-Life 2’s release, Valve revealed Steam to the world at the 2002 Game Developers Conference, announcing “a broadband business platform for direct software delivery and content management” in a press release. Valve’s vision for a new suite of developer tools for content publishing, billing, version control, and anti-piracy was all present and stressed in that initial announcement.

Perhaps the largest goal for Steam, though, was removing the middlemen of retail game distribution and giving more direct control to game developers, including Valve. “By eliminating the overhead of physical goods distribution, developers will be able to leverage the efficiency of broadband to improve customer service and increase operating margins,” Valve wrote in its 2002 announcement.

Valve’s Gabe Newell on stage unveiling Steam at GDC 2002. Credit: 4Gamer

On stage at GDC, Valve founder Gabe Newell took things even further, positioning himself as “a new-age Robin Hood who wanted to take from the greedy publishers what independent game developers deserved: a larger piece of the revenue pie,” as Gamespot’s Final Days of Half-Life 2 feature summed it up. Cutting out the publishers and retailers, Newell said, could be the difference between a developer taking home $7 or $30 on a full-price game (which generally ran $50 at the time).



Dragon Age: The Veilguard and the choices you make while saving the world


“Events are weaving together quickly. The fate of the world shall be decided.”

Dragon Age: The Veilguard is as much about the world, story, and characters as the gameplay. Credit: EA

BioWare’s reputation as a AAA game development studio is built on three pillars: world-building, storytelling, and character development. In-game codices offer textual support for fan theories, replays are kept fresh by systems that encourage experimenting with alternative quest resolutions, and players get so attached to their characters that an entire fan-built ecosystem of player-generated fiction and artwork has sprung up over the years.

After two very publicly disappointing releases with Mass Effect: Andromeda and Anthem, BioWare pivoted back to the formula that brought it success, but I’m wrapping up the first third of The Veilguard, and it feels like there’s an ingredient missing from the special sauce. Where are the quests that really let me agonize over the potential repercussions of my choices?

I love Thedas, and I love the ragtag group of friends my hero has to assemble anew in each game, but what really gets me going as a roleplayer are the morally ambiguous questions that make me squirm: the dreadful and delicious BioWare decisions.

Should I listen to the tormented templar and assume every mage I meet is so dangerous that I need to adopt a “strike first, ask questions later” policy, or can I assume at least some magic users are probably not going to murder me on sight? When I find out my best friend’s kleptomania is the reason my city has been under armed occupation for the past 10 years, do I turn her in, or do I swear to defend her to the end?

Questions like these keep me coming back to replay BioWare games over and over. I’ve been eagerly awaiting the release of the fourth game in the Dragon Age franchise so I can find out what fresh dilemmas I’ll have to wrangle, but at about 70 hours in, they seem to be in short supply.

The allure of interactive media, and the limitations

Before we get into some actual BioWare choices, I think it’s important to acknowledge the realities of the medium. These games are brilliant interactive stories. They reach me like my favorite novels do, but they offer a flexibility not available in printed media (well, outside of the old Choose Your Own Adventure novels, anyway). I’m not just reading about the main character’s decisions; I’m making the main character’s decisions, and that can be some heady stuff.

There’s a limit to how much of the plot can be put into a player’s hands, though. A roleplaying game developer wants to give as much player agency as possible, but that has to happen through the illusion of choice. You must arrive at one specific location for the sake of the plot, but the game can accommodate letting you choose from several open pathways to get there. It’s a railroad—hopefully a well-hidden railroad—but at the end of the day, no matter how great the storytelling is, these are still video games. There’s only so much they can do.

So if you have to maintain an illusion of choice but also want to invite your players to thoughtfully engage with your decision nodes, what do you do? You reward them for playing along and suspending their disbelief by giving their choices meaningful weight inside your shared fantasy world.

If the win condition of a basic quest is a simple “perform action X at location Y,” you have to spice that up with some complexity or the game gets very old very quickly. That complexity can be programmatic, or it can be narrative. With your game development tools, you can give the player more than one route to navigate to location Y through good map design, or you can make action X easier or harder to accomplish by setting preconditions like puzzles to solve or other nodes that need interaction. With the narrative, you’re not limited to what can be accomplished in your game engine. The question becomes, “How much can I give the player to emotionally react to?”

In a field packed with quality roleplaying game developers, this is where BioWare has historically shined: making me have big feelings about my companions and the world they live in. This is what I crave.

Who is (my) Rook, anyway?

The Veilguard sets up your protagonist, Rook, with a lightly sketched backstory tied to your chosen faction. You pick a first name, you are assigned a last name, and you read a brief summary of an important event in Rook’s recent history. The rest is on you, and you reveal Rook’s essential nature through the dialog wheel and the major plot choices you make. Those plot choices are necessarily mechanically limited in scope and in rewards/consequences, but narratively, there’s a lot of ground you can cover.

One version of the protagonist in Dragon Age: The Veilguard, with a dialogue wheel showing options

For the record, I picked “Oof.” That’s just how my Rook rolls. Credit: Marisol Cuervo

During the game’s tutorial, you’re given information about a town that has mysteriously fallen out of communication with the group you’re assisting. You and your companions set out to discover what happened. You investigate the town, find the person responsible, and decide what happens to him next. Mechanically, it’s pretty straightforward.

The real action is happening inside your head. As Rook, I’ve just walked through a real horror show in this small village, put together some really disturbing clues about what’s happening, and I’m now staring down the person responsible while he’s trapped inside an uncomfortably slimy-looking cyst of material the game calls the Blight. Here is the choice: What does my Rook decide to do with him, and what does that choice say about her character? I can’t answer that question without looking through the lens of my personal morality, even if I intend for Rook to act counter to my own nature.

My first emotional, knee-jerk reaction is to say screw this guy. Leave him to the consequences of his own making. He’s given me an offensively venal justification for how he got here, so let him sit there and stare at his material reward for all the good it will do him while he’s being swallowed by the Blight.

The alternative is saving him. You get to give him a scathing lecture, but he goes free, and it’s because you made that choice. You walked through the center of what used to be a vibrant settlement and saw this guy, you know he’s the one who allowed this mess to happen, and you stayed true to your moral center anyway. Don’t you feel good? Look at you, big hero! All those other people will die from the Blight, but you held the line and said, “Well, not this one.”

A dialogue wheel gives the player a decisive choice

Being vindictive might feel good, but I feel leaving him is a profoundly evil choice. Credit: Marisol Cuervo

There’s no objectively right answer about what to do with the mayor, and I’m here for it. Leaving him or saving him: Neither option is without ethical hazards. I can use this medium to dig deep into who I am and how I see myself before building up my idea of who my Rook is going to be.

Make your decision, and Rook lives with the consequences. Some are significant, and some… not so much.

Your choices are world-changing—but also can’t be

Longtime BioWare fans have historically been given the luxury of having their choices—large and small—acknowledged by the next game in the franchise. In past games, this happened largely through small dialog mentions or NPC reappearances, but as satisfying as this is for me as a player, it creates a big problem for BioWare.

Here’s an example: depending on the actions of the player, ginger-haired bard and possible romantic companion Leliana can be missed entirely as a recruitable companion in Dragon Age: Origins, the first game in the franchise. If she is recruited, she can potentially die in a later quest. It’s not guaranteed that she survives the first game. That’s a bit of a problem in Dragon Age II, where Leliana shows up in one of the downloadable content packs. It’s a bigger problem in the third game, where Leliana is the official spymaster for the titular Inquisition. BioWare calls these NPCs who can exist in a superposition of states “quantum characters.”

A tweet that says BioWare's default stance is to avoid using quantum characters, but an exception was made for Leliana

One of the game’s creative leaders talking about “quantum characters.” Credit: Marisol Cuervo

If you follow this thought to its logical end, you can understand where BioWare is coming from: After a critical mass of quantum characters is reached, the effects are impossible to manage. BioWare sidesteps the Leliana problem entirely in The Veilguard by just not talking about her.

BioWare has staunchly maintained that, as a studio, it does not have a set canon for the history of its games; there’s only the personal canon each player develops as a result of their gameplay. As I’ve been playing, I can tell there’s been a lot of thought put into ensuring none of The Veilguard’s in-game references to areas covered in the previous three games would invalidate a player’s personal canon, and I appreciate that. That’s not an easy needle to thread. I can also see that the same care was put into ensuring that this game’s decisions would not create future quantum characters, and that means the choices we’re given are very carefully constrained to this story and only this story.

But it still feels like we’re missing an opportunity to make these moral decisions on a smaller scale. Dragon Age: Inquisition introduced a collectible and cleverly hidden item for players to track down while they worked on saving the world. Collect enough trinkets and you eventually open up an entirely optional area to explore. Because this is BioWare, though, there was a catch: To find the trinkets, you had to stare through the crystal eyes of a skull sourced from the body of a mage who has been forcibly cut off from the source of all magic in the world. Is your Inquisitor on board with that, even if it comes with a payoff? Personally, I don’t like the idea. My Inquisitor? She thoroughly looted the joint. It’s a small choice, and it doesn’t really impact the long-term state of the world, but I still really enjoyed working through it.

Later in the first act of The Veilguard, Rook finally gets an opportunity to make one of the big, ethically difficult decisions. I’ll avoid spoilers, but I don’t mind sharing that it was a satisfyingly difficult choice to make, and I wasn’t sure I felt good about my decision. I spent a lot of time staring at the screen before clicking on my answer. Yeah, that’s the good stuff right there.

In keeping with the studio’s effort to avoid creating quantum worldstates, The Veilguard treads lightly with the mechanical consequences of this specific choice, asking the player to shoulder the narrative repercussions instead. How hard the consequences hit, or if they miss, comes down to your individual approach to roleplaying games. Are you a player who inhabits the character and lives in the world? Or is it more like you’re riding along, only watching a story unfold? Your answer will greatly influence how connected you feel to the choices BioWare asks you to make.

Is this better or worse?

Much online discussion around The Veilguard has centered on BioWare’s decision to incorporate only three choices from the previous game in the series, Inquisition, rather than using the existing Dragon Age Keep to import an entire worldstate. I’m a little disappointed by this, but I’m also not sure anything in Thedas is significantly changed because my Hero of Ferelden was a softie who convinced the guard in the Ostagar camp to give his lunch to the prisoner who was in the cage for attempted desertion.

At the same time, as I wrap up the first act, I’m missing the mild tension I should be feeling when the dialog wheel comes up, and not just because many of the dialog choices seem to be three flavors of “yes, and…” One of my companions was deeply unhappy with me for a period of time after I made the big first-act decision and sharply rebuffed my attempts at justification, snapping at me that I should go. Previous games allowed companions to leave your party forever if they disagreed enough with your main character; this doesn’t seem to be a mechanic you need to worry about in The Veilguard.

Rook’s friends might be divided on how they view her choice of verbal persuasion versus percussive diplomacy, but none of them had anything to say about it while she was very earnestly attempting to convince a significant NPC they were making a pretty big mistake. One of Rook’s companions later asked about her intentions during that interaction but otherwise had no reaction.

Another dialogue choice in Veilguard

BioWare, are you OK? Why do you keep punching people who don’t agree with you? Credit: Marisol Cuervo

Seventy hours into the game, I’m looking for places where I have to navigate my own ethical landscape before I can choose to have Rook conform to, or flout, the social mores of northern Thedas. I’m still helping people, being the hero, and having a lot of fun doing so, but the problems I’m solving aren’t sticky, and they lack the nuance I enjoyed in previous games. I want to really wrestle with the potential consequences before I decide to do something. Maybe this is something I’ll see more of in the second act.

If the banal, puppy-kicking kind of evil has been minimized in favor of larger stakes—something I applaud—it has left a sort of vacuum on the roleplaying spectrum. BioWare has big opinions about how heroes should act and how they should handle interpersonal conflict. I wish I felt more like I was having that struggle rather than being told that’s how Rook is feeling.

I’m hopeful my Rook isn’t just going to save the world, but that in the next act of the game, I’ll see more opportunities from BioWare to let her do it her way.

Dragon Age: The Veilguard and the choices you make while saving the world Read More »


Trump says Elon Musk will lead “DOGE,” a new Department of Government Efficiency

Trump’s “perfect gift to America”

Trump’s statement said the department, whose name is a reference to the Doge meme, “will drive out the massive waste and fraud which exists throughout our annual $6.5 Trillion Dollars of Government Spending.” Trump said DOGE will “liberate our Economy” and that its “work will conclude no later than July 4, 2026” because “a smaller Government, with more efficiency and less bureaucracy, will be the perfect gift to America on the 250th Anniversary of The Declaration of Independence.”

“I look forward to Elon and Vivek making changes to the Federal Bureaucracy with an eye on efficiency and, at the same time, making life better for all Americans,” Trump said. Today, Musk wrote that the “world is suffering slow strangulation by overregulation,” and that “we finally have a mandate to delete the mountain of choking regulations that do not serve the greater good.”

Musk has been expected to have influence in Trump’s second term after campaigning for him. Trump previously vowed to have Musk head a government efficiency commission. “That would essentially give the world’s richest man and a major government contractor the power to regulate the regulators who hold sway over his companies, amounting to a potentially enormous conflict of interest,” said a New York Times article last month.

The Wall Street Journal wrote today that “Musk isn’t expected to become an official government employee, meaning he likely wouldn’t be required to divest from his business empire.”



What did the snowball Earth look like?

All of which raises questions about what the snowball Earth might have looked like in the continental interiors. A team of US-based geologists think they’ve found some glacial deposits in the form of what are called the Tavakaiv sandstones in Colorado. These sandstones are found along the Front Range of the Rockies, including areas just west of Colorado Springs. And, if the authors’ interpretations are correct, they formed underneath a massive sheet of glacial ice.

There are lots of ways to form sandstone deposits, and they can be difficult to date because they’re aggregates of the remains of much older rocks. But in this case, the Tavakaiv sandstone is interrupted by intrusions of dark-colored rock that contains quartz and large amounts of hematite, a form of iron oxide.

These intrusions tell us a remarkable number of things. For one, some process must have exerted enough force to drive material into small faults in the sandstone. Hematite only gets deposited under fairly specific conditions, which tells us a bit more. And, most critically, hematite can trap uranium and the lead it decays into, providing a way of dating when the deposits formed.
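That uranium-to-lead clock rests on simple exponential decay. As a minimal sketch (not the study’s actual methodology, just the standard age equation for the ²³⁸U → ²⁰⁶Pb system with its commonly cited decay constant), the age falls out of the measured ratio of radiogenic lead to remaining uranium:

```python
import math

# Decay constant for 238U -> 206Pb, per year (standard reference value)
LAMBDA_238 = 1.55125e-10

def u_pb_age(pb206_per_u238: float) -> float:
    """Age in years implied by a measured radiogenic 206Pb/238U atomic ratio.

    Solves N_Pb / N_U = e^(lambda * t) - 1 for t.
    """
    return math.log(1.0 + pb206_per_u238) / LAMBDA_238

# A ratio of roughly 0.115 corresponds to about 700 million years,
# the older end of the dates reported for the Tavakaiv hematite.
print(round(u_pb_age(0.1147) / 1e6))  # ~700 (million years)
```

Real U-Pb geochronology cross-checks this against the faster-decaying ²³⁵U → ²⁰⁷Pb system and corrects for any non-radiogenic lead, but the core arithmetic is no more than the line above.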

Under the snowball

Depending on which site was being sampled, the hematite produced a range of dates, from as recent as 660 million years ago to as old as 700 million years. That means all of them were formed during what’s termed the Sturtian glaciation, which ran from 715 million to 660 million years ago. At the time, the core of what is now North America was in the equatorial region. So, the Tavakaiv sandstones can provide a window into what at least one continent experienced during the most severe global glaciation of the Cryogenian Period.
