India’s digital payment platforms process trillions of dollars a year through UPI, the government-built real-time payments rail that handles more than 90% of all payment transactions in the country. Yet one of their largest net revenue line items is not a payment product at all: it’s a cheap plastic speaker that sits on a shopkeeper’s counter and reads out incoming payments aloud.
The roughly 23 million soundboxes deployed across India earn about $220 million a year in rental fees, more than every explicitly UPI-linked revenue line in the ecosystem combined, according to estimates from Bernstein. Each device costs $7-12 to manufacture and earns its platform $7-10 a year in rent. The story adds: PhonePe processes about 48% of all UPI transactions in India. Its net payment processing revenue in H1 FY26 was about $83 million. Its device revenue was about $34 million. Running nearly half of India’s real-time payment infrastructure earns PhonePe only 2.4 times what it makes from renting speakers to shopkeepers.
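Those figures hang together on a quick back-of-the-envelope check. The short Python sketch below simply re-runs the arithmetic using the numbers quoted in the story (the variable names are ours and nothing here is independently verified):

```python
# Sanity check of the figures quoted above, using only numbers from the story.
soundboxes = 23_000_000            # devices deployed across India
rent_per_device_usd = (7, 10)      # annual rent per device, low/high estimate
annual_rent_usd = tuple(soundboxes * r for r in rent_per_device_usd)
print(annual_rent_usd)             # (161000000, 230000000) -> consistent with ~$220M/year

phonepe_payments_h1_usd = 83_000_000   # net payment processing revenue, H1 FY26
phonepe_devices_h1_usd = 34_000_000    # soundbox/device rental revenue, H1 FY26
print(round(phonepe_payments_h1_usd / phonepe_devices_h1_usd, 1))  # ~2.4
```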
An anonymous reader quotes a report from the Guardian: Tech companies are conflating traditional artificial intelligence with generative AI when claiming the energy-hungry technology could help avert climate breakdown, according to a report. Most claims that AI can help avert climate breakdown refer to machine learning and not the energy-hungry chatbots and image generation tools driving the sector’s explosive growth of gas-guzzling datacenters, the analysis of 154 statements found.
The research, commissioned by nonprofits including Beyond Fossil Fuels and Climate Action Against Disinformation, did not find a single example where popular tools such as Google’s Gemini or Microsoft’s Copilot were leading to a “material, verifiable, and substantial” reduction in planet-heating emissions. Ketan Joshi, an energy analyst and author of the report, said the industry’s tactics were “diversionary” and relied on tried and tested methods that amount to “greenwashing.”
He likened it to fossil fuel companies advertising their modest investments in solar panels and overstating the potential of carbon capture. “These technologies only avoid a minuscule fraction of emissions relative to the massive emissions of their core business,” said Joshi. “Big tech took that approach and upgraded and expanded it.” […] Joshi said the discourse around AI’s climate benefits needed to be “brought back to reality.” “The false coupling of a big problem and a small solution serves as a distraction from the very preventable harms being done through unrestricted datacenter expansion,” he said.
No worries if the US doesn’t want to be friends with Europe anymore
Lockheed Martin’s F-35 fighter aircraft can be jailbroken “just like an iPhone,” the Netherlands’ defense secretary has claimed.…
According to Lara Trump, Donald Trump has prepared but not yet delivered a speech about extraterrestrial life, though White House Spokesperson Karoline Leavitt says such a speech would be “news to me.” Leavitt continued: “I’ll have to check in with our speech writing team. Uh, and that would be of great interest to me personally, and I’m sure all of you in this room and apparently former President Obama, too.” The Hill reports: Lara Trump, speaking on the Pod Force One podcast, said the president has played coy when she and her husband Eric have asked about the existence of UFOs and aliens. “We’ve kind of asked my father-in-law about this… we all want to know about the UFOs… and he played a little coy with us,” Lara Trump said. “I’ve heard kind of around, I think my father-in-law has actually said it, that there is some speech that he has, that I guess at the right time, I don’t know when the right time is, he’s going to break out and talk about and it has to do with maybe some sort of extraterrestrial life.”
Obama has clarified in recent days that he has seen no evidence that aliens are real, after comments he made on a podcast with Brian Tyler Cohen that seemed to confirm his knowledge of extraterrestrial life went viral. “They’re real but I haven’t seen them,” Obama said on the podcast. “And they’re not being kept in… what is it? Area 51. There’s no underground facility unless there’s this enormous conspiracy and they hid it from the president of the United States.”
Later, in a post on Instagram, Obama clarified that he was trying to answer in the light-hearted spirit of a speed round of questions and that, “Statistically, the universe is so vast that the odds are good there’s life out there.” “But the distances between solar systems are so great that the chances we’ve been visited by aliens is low, and I saw no evidence during my presidency that extraterrestrials have made contact with us. Really!”
An anonymous reader quotes a report from the New York Times: The first shot has been fired in the legal war over the Environmental Protection Agency’s rollback of its “endangerment finding,” which had been the foundation for federal climate regulations. Environmental and health groups filed a lawsuit on Wednesday morning in the U.S. Court of Appeals for the District of Columbia Circuit, arguing that the E.P.A.’s move to eliminate limits on greenhouse gases from vehicles, and potentially other sources, was illegal. The suit was triggered by last week’s decision by the E.P.A. to kill one of its key scientific conclusions, the endangerment finding, which says that greenhouse gases harm public health. The finding had formed the basis for climate regulations in the United States.
The lawsuit claims that the agency is rehashing arguments that the Supreme Court already considered, and rejected, in a landmark 2007 case, Massachusetts v. E.P.A. The issue is likely to end up back before the Supreme Court, which is now far more conservative. In the 2007 case, the justices ruled that the E.P.A. was required to issue a scientific determination as to whether greenhouse gases were a threat to public health under the 1970 Clean Air Act and to regulate them if they were. As a result, two years later, in 2009, the E.P.A. issued the endangerment finding, allowing the government to limit greenhouse gas emissions, which cause climate change. “With this action, E.P.A. flips its mission on its head,” said Hana Vizcarra, a senior lawyer at the nonprofit Earthjustice, which is representing six groups in the lawsuit. “It abandons its core mandate to protect human health and the environment to boost polluting industries and attempts to rewrite the law in order to do so.”
[…] Also on Wednesday, two other nonprofit law firms filed their own lawsuit against the E.P.A. over the endangerment finding, on behalf of 18 youth plaintiffs. That suit, by Our Children’s Trust and Public Justice, argues that the E.P.A.’s move was unconstitutional. Separate legal challenges to E.P.A. rules are generally consolidated into one case at the D.C. Circuit Court, which is where disputes involving the Clean Air Act are required to be heard. But the sheer number of groups involved could make the legal battle lengthy and complicated to manage. A three-judge panel at the Circuit Court is expected to pore over several rounds of legal briefs before oral arguments begin. Those may not take place until next year.
Data Loss Prevention? Yeah, about that…
The bot couldn’t keep its prying eyes away. Microsoft 365 Copilot Chat has been summarizing emails labeled “confidential” even when data loss prevention policies were configured to prevent it.…
Uber plans to invest $100 million in EV charging infrastructure to support current and future robotaxi fleets in cities like Los Angeles, the Bay Area, and Dallas, “eventually partner[ing] with multiple robotaxi companies on actual robotaxi deployment — WeRide, Waabi, Lucid, Nuro, May Mobility, Momenta, and Waymo of course,” reports CleanTechnica. From the report: “Cities can only unlock the full promise of autonomy and electrification if the right charging infrastructure is built for scale. That infrastructure needs to work for today’s drivers and the fleets of the future,” said Uber’s global head of mobility, Pradeep Parameswaran. In addition to building some infrastructure itself, the company is making “utilization guarantee agreements” with EVgo for various major US cities as well as Electra, Hubber, and Ionity in Europe.
On Uber’s latest shareholder call, CEO Dara Khosrowshahi said that the company would make “targeted growth-oriented investments aligned with the 6 strategic areas of focus.” That includes self-driving vehicles/robotaxis. “With the benefit of learning from multiple AV deployments around the world, we’re more convinced than ever that AVs will unlock a multitrillion-dollar opportunity for Uber. AVs amplify the fundamental strengths of our platform, global scale, deep demand density, sophisticated marketplace technology, and decades of on-the-ground experience matching riders, drivers, and vehicles, all in real time,” Khosrowshahi added.
Google’s Pixel 10a is essentially a flatter version of last year’s Pixel 9a, keeping the same Tensor G4 chip, camera hardware, RAM, storage, and $500 price while dropping features like Pixelsnap Qi2 charging and advanced Gemini AI capabilities found in higher-end models. Gizmodo reports: We use words like “candy bar” or “slab” to describe our full-screen smartphones, but Google has designed what is likely the slabbiest phone of the modern era. During an hour-long hands-on with Google’s all-new Google Pixel 10a, I slid the phone across a desk and felt oddly satisfied that it could glide as neatly as a figure skater without any hint of a camera bump hindering its path. It’s the first thing I need to bring up regarding the Pixel 10a, because there’s no other discernible difference between this phone and the previous-gen Pixel 9a.
And that seems to be the point. The Pixel 10a starts at $500, exactly how much the Pixel 9a cost at launch. In a Q&A with journalists, Google told Gizmodo that the company wanted to offer the same price point as before. That apparently required Google to stick with the same Tensor G4 chip as last year. You still have the same storage options of 128GB or 256GB and the minimum of 8GB of RAM. Think of the Pixel 10a as a Pixel 9a with a reduced camera bump. If you’re one of the heretics who uses a phone without a case, that fact alone may be enough to pay attention. Otherwise, you’ll be scrounging to find any real difference between the Pixel 10a and one of last year’s best mid-range phones.
An anonymous reader quotes a report from the New York Times: Meta is preparing to spend $65 million this year to boost state politicians who are friendly to the artificial intelligence industry, beginning this week in Texas and Illinois, according to company representatives. The sum is the biggest election investment by Meta, which owns Facebook, Instagram and WhatsApp. The company was previously cautious about campaign engagements, making small donations out of a corporate political action committee and contributing to presidential inaugurations. It also let executives like Sheryl Sandberg, who was chief operating officer, support candidates in their personal capacities.
Now Meta is betting bigger on politics, driven by concerns over the regulatory threat to the artificial intelligence industry as it aims to beat back legislation in states that it fears could inhibit A.I. development, company representatives said. To do that, Meta is quietly starting two new super PACs, according to federal filings surfaced by The New York Times. One group, Forge the Future Project, is backing Republicans. Another, Making Our Tomorrow, is backing Democrats. The new PACs join two others already started by Meta, one of which is focused on California while the other is an umbrella organization that finances the company’s spending in other states. In total, the four super PACs have an initial budget of $65 million, according to federal and state filings. Meta’s spending is set to start this week in Illinois and Texas, where the company generally favors backing Democratic and Republican incumbents or engaging in open races rather than deposing existing officials, company representatives said in interviews.
[…] Last year, Meta’s public policy vice president, Brian Rice, said the company would start spending in politics because of “inconsistent regulations that threaten homegrown innovation and investments in A.I.” The company started its first two super PACs, American Technology Excellence Project and Mobilizing Economic Transformation Across California. Meta put $45 million into American Technology Excellence Project in September. That money is expected, in turn, to flow to Forge the Future Project, Making Our Tomorrow and potentially to other entities. […] In California, which has some of the country’s most onerous campaign-finance disclosures, Meta in August put $20 million into Mobilizing Economic Transformation Across California, which shortens to META California. State laws require the sponsoring company to be disclosed in the name of the entity. In December, Meta put $5 million into another California committee called California Leads, which is focused on promoting moderate business policy and not A.I., according to state records.
Mark Zuckerberg took the stand Wednesday in a high-profile jury trial over social media addiction. In an appearance that was described by NBC News as “combative,” the Facebook founder reportedly said that Meta’s goal was to make Instagram “useful,” not to increase the time users are spending in the app.
On the stand, Zuckerberg was questioned about a company document that said improving engagement was among “company goals,” according to CNBC. But Zuckerberg claimed that the company had “made the conscious decision to move away from those goals, focusing instead on utility,” according to The Associated Press. “If something is valuable, people will use it more because it’s useful to them,” he said.
The trial stems from a lawsuit brought by a California woman identified as “KGM” in court documents. The now 20-year-old alleges that she was harmed as a child by addictive features in Instagram, YouTube, Snapchat and TikTok. TikTok and Snap opted to settle before the case went to trial.
Zuckerberg was also asked about previous public statements, including his remarks on Joe Rogan’s podcast last year that he can’t be fired by Meta’s board because he controls a majority of the voting power. According to The New York Times, Zuckerberg accused the plaintiffs’ lawyer of “mischaracterizing” his past comments more than a dozen times.
Zuckerberg’s appearance in court also apparently prompted the judge to warn people in the courtroom not to record the proceedings using AI glasses. As CNBC notes, members of Zuckerberg’s entourage were spotted wearing Meta’s smart glasses as the CEO was escorted into the courthouse. It’s unclear if anyone was actually using the glasses in court, but legal affairs journalist Meghann Cuniff reported that the judge was particularly concerned about the possibility of jurors being recorded or subjected to facial recognition. (Meta’s smart glasses do not currently have native facial recognition abilities, but recent reports suggest the company is considering adding such features.)
The Los Angeles trial has been closely watched not just because it marked a rare in-court appearance for Zuckerberg. It’s among the first of several cases where Meta will face allegations that its platforms have harmed children. In this case and in a separate proceeding in New Mexico, Meta’s lawyers have cast doubt on the idea that social media should be considered a real addiction. Instagram chief Adam Mosseri previously testified in the same Los Angeles trial that Instagram isn’t “clinically addictive.”
visionOS 26.4 will bring foveated streaming to Apple Vision Pro, enabling higher-quality wireless VR remote rendering from a local or cloud PC.
Before you continue reading, note that foveated streaming is not the same as foveated rendering, though the two techniques can be used alongside each other. As the names suggest, foveated rendering means the rendering device draws the area of each frame you’re currently looking at at higher resolution, while foveated streaming means encoding and sending that area to the headset at higher resolution.
It’s a term you may have heard in the context of Valve’s Steam Frame, where it’s a fundamental always-on feature of its PC VR streaming offering, delivered via the USB PC wireless adapter.
Given that the video decoders in headsets have a limited maximum resolution and bitrate, foveated streaming lets the host spend that limited budget where it matters most, keeping the region of the frame you’re looking at sharp instead of encoding the entire frame at a uniform quality.
Valve’s depiction of foveated streaming.
Unlike the macOS Spatial Rendering introduced in the main visionOS 26 release last year, which is a relatively high-level system that only supports a local Mac as a host, Apple’s developer documentation describes the new Foveated Streaming as a low-level host-agnostic framework.
The documentation highlights Nvidia’s CloudXR SDK as an example host, while noting that it should also work with local PCs. Apple even has a Windows OpenXR sample available on GitHub, which to our knowledge is the first and only time the company has even mentioned the industry-standard XR API, never mind actually using it.
The lead developer of the visionOS port of the PC VR streaming app ALVR, Max Thomas, tells UploadVR that he’s currently looking into adding support for foveated streaming, but that it will likely be “a lot of work”.
Because of how the feature works, Apple’s foveated streaming might even enable foveated rendering for tools like ALVR.
Normally, visionOS does not provide developers with any information about where the user is looking – Apple says this is in order to preserve privacy. Instead, developers only receive events, such as which element the user was looking at as they performed the pinch gesture. Crucially for foveated streaming to work, though, the new API tells the developer the “rough” region of the frame the user is looking at.
This should allow the host to render at higher resolution in this region too, not just stream it in higher resolution. As always, this will require the specific VR game to support foveated rendering, or to support tools that inject foveated rendering.
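To illustrate the idea rather than Apple’s actual framework (whose real type and method names we haven’t verified, so the names and tiling scheme below are illustrative assumptions), here is a minimal Python sketch of the host-side bookkeeping foveated streaming implies: given the rough gaze region reported by the headset and a fixed decoder pixel budget, keep the foveal tile at full resolution and downscale the periphery so the encoded frame still fits.

```python
from dataclasses import dataclass
import math

@dataclass
class GazeRegion:
    # Normalized center of the "rough" gaze region reported by the headset (0..1).
    cx: float
    cy: float
    # Normalized size of the foveal tile that should stay at full resolution.
    width: float = 0.25
    height: float = 0.25

def plan_foveated_encode(frame_w: int, frame_h: int,
                         gaze: GazeRegion,
                         decoder_pixel_budget: int) -> dict:
    """Split a frame into a full-resolution foveal tile plus a downscaled
    peripheral layer so the total encoded pixels fit the decoder budget."""
    fovea_w = int(frame_w * gaze.width)
    fovea_h = int(frame_h * gaze.height)
    fovea_pixels = fovea_w * fovea_h

    # Whatever budget remains after the foveal tile goes to the periphery.
    remaining = max(decoder_pixel_budget - fovea_pixels, 0)
    periphery_pixels = frame_w * frame_h
    # Uniform linear downscale factor for the peripheral layer (<= 1.0).
    scale = min(1.0, math.sqrt(remaining / periphery_pixels)) if periphery_pixels else 0.0

    return {
        "fovea_origin": (int((gaze.cx - gaze.width / 2) * frame_w),
                         int((gaze.cy - gaze.height / 2) * frame_h)),
        "fovea_size": (fovea_w, fovea_h),
        "periphery_size": (int(frame_w * scale), int(frame_h * scale)),
    }

# Example: a 3840x2160 source frame, gaze slightly right of center,
# and a decoder that can only handle ~2.2 MP per frame.
print(plan_foveated_encode(3840, 2160, GazeRegion(cx=0.6, cy=0.5), 2_200_000))
```

A real host (a CloudXR or OpenXR streamer, say) would presumably run something like this per eye and re-run it every time the reported gaze region changes, but the budgeting idea is the same.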
Clip from Apple’s visionOS foveated streaming sample app.
Interestingly, Apple’s documentation also states that visionOS supports displaying both rendered-on-device and remote content simultaneously. The company gives the example of rendering the interior of a car or aircraft on the headset while streaming the highly detailed external world on a powerful cloud PC, which would be preferable from a perceived latency and stability perspective to rendering everything in the cloud.
We’ll keep an eye on the visionOS developer community in the coming months, especially the enterprise space, for any interesting uses of Apple’s foveated streaming framework in practice.
Mark Zuckerberg is testifying in a landmark Los Angeles trial examining whether Meta and other social media firms can be held liable for designing platforms that allegedly addict and harm children. NBC News reports: It’s the first of a consolidated group of cases — from more than 1,600 plaintiffs, including over 350 families and over 250 school districts — scheduled to be argued before a jury in Los Angeles County Superior Court. Plaintiffs accuse the owners of Instagram, YouTube, TikTok and Snap of knowingly designing addictive products harmful to young users’ mental health. Historically, social media platforms have been largely shielded by Section 230, a provision added to the Communications Act of 1934 that says internet companies are not liable for content users post. TikTok and Snap reached settlements with the first plaintiff, a 20-year-old woman identified in court as K.G.M., ahead of the trial. The companies remain defendants in a series of similar lawsuits expected to go to trial this year.
[…] Matt Bergman, founding attorney of Social Media Victims Law Center — which is representing about 750 plaintiffs in the California proceeding and about 500 in the federal proceeding — called Wednesday’s testimony “more than a legal milestone — it is a moment that families across this country have been waiting for.” “For the first time, a Meta CEO will have to sit before a jury, under oath, and explain why the company released a product its own safety teams warned were addictive and harmful to children,” Bergman said in a statement Tuesday, adding that the moment “carries profound weight” for parents “who have spent years fighting to be heard.” “They deserve the truth about what company executives knew,” he said. “And they deserve accountability from the people who chose growth and engagement over the safety of their children.”
Last year Dyson introduced the PencilVac, which it immediately declared the “world’s slimmest vacuum cleaner.” Presumably, then, the title of world’s slimmest wet floor cleaner goes to the newly unveiled PencilWash.
Promising a “lighter, slimmer and smaller solution to wet cleaning without compromising on hygiene,” the PencilWash is designed to let you clean everywhere you need to with minimal hassle. Like the vacuum cleaner with which it shares the first part of its name, the handle measures just 1.5 inches in diameter from top to bottom, and the whole thing weighs little more than 2kg.
The ultra-thin design allows the cleaner to lie almost completely flat, allowing you to get into tight corners or under low furniture, where more traditionally bulky devices might struggle. Its slender proportions also make it easier to store if your home is on the smaller side.
Dyson says the PencilWash only applies fresh water to floors, and after swiftly eliminating spills and stains it should dry up pretty quickly. Its high-density microfiber roller is designed to tackle both wet and dry debris in one pass, and because it doesn’t have a traditional filter, you won’t have to worry about trapped dirt or lingering smells.
Above the power buttons there’s a screen displaying remaining battery level, and the handle can be slotted into a charging dock when not in use.
The Dyson PencilWash will cost $349, with a release date yet to be announced.
Earlier this month we spotted the addition of a new GFX1170 GPU target in the AMDGPU LLVM back-end. What makes this GFX1170 target interesting is that it’s marked as an APU/SoC part with “RDNA 4m” while being part of the GFX11 series. The GFX11 series is for RDNA3, GFX115x is for RDNA 3.5, and GFX12 is RDNA4. More ISA changes have now been committed to the AMDGPU LLVM back-end that bring a few more instruction differences better into line with RDNA4…
Google is bringing its Lyria 3 AI music model into the Gemini app, allowing users to generate 30-second songs from text, images, or video prompts directly within the chatbot. The Verge reports: Lyria 3’s text-to-music capabilities allow Gemini app users to make songs by describing specific genres, moods, or memories, such as asking for an “Afrobeat track for my mother about the great times we had growing up.” The music generator can make instrumental audio and songs with lyrics composed automatically based on user prompts. Users can also upload photographs and video references, which Gemini then uses to generate a track with lyrics that fit the vibe.
“The goal of these tracks isn’t to create a musical masterpiece, but rather to give you a fun, unique way to express yourself,” Google said in its announcement blog. Gemini will add custom cover art generated by Nano Banana to songs created on the app, which aims to make them easier to share and download. Google is also bringing Lyria 3 to YouTube’s Dream Track tool, which allows creators to make custom AI soundtracks for Shorts.
Dream Track and Lyria were initially demonstrated with the ability to mimic the style and voice of famous performers. Google says it’s been “very mindful” of copyright in the development of Lyria 3 and that the tool “is designed for original expression, not for mimicking existing artists.” When prompted for a specific artist, Gemini will make a track that “shares a similar style or mood” and uses filters to check outputs against existing content.
Co-op parcel delivery horror game Deadly Delivery adds a new ‘Mystery Room’, door microphone, and other new mechanics.
We previously reviewed Flat Head Studio’s Deadly Delivery, finding it to be a “clever, effective, and genuinely funny VR co-op that nails the feel of physical play in a spooky, comic world.” Flat Head has already updated the game with new content several times since its December launch, adding a new Ice Caves location and several quality of life features.
The Mystery Room adds a new room to the Bloodmoon and Ice Cave levels with more doors for players to explore. Some doors now have a microphone where players have to declare themselves before proceeding with the drop-off. A new item called the Door Reuser is available to purchase from the in-game shop as well, allowing players to deliver an extra package to a door.
The update also includes general bug fixes, an ammo increase for the Roulette Gun, and wider passages in certain areas to allow multiple players to move around more easily.
Deadly Delivery is available on Meta Quest and Steam for $9.99.
We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.
Google is releasing its budget a-series version of the Pixel 10 on March 5, a whole month earlier than the 9a was released in 2025. There is not much dividing the 10a from the 9a, but there are a few software updates that can make it worth it for some people. Throw in a $100 Amazon gift card, and it’s hard to say no. Google has pre-orders for the Pixel 10a open already, going for $499, plus the gift card. Alternatively, you can get the Pixel Buds 2a instead of the $100 gift card for the same $499 price.
Google Pixel 10a – Obsidian – 128 GB with Pixel Buds 2a Bundle: $499.00 at Amazon ($628.00, save $129.00)
Google Pixel 10a 128GB 6.3″ Unlocked Smartphone + $100 Gift Card: $499.00 at Amazon ($599.00, save $100.00)
Lifehacker’s Associate Tech Editor Michelle Ehrhardt actually got her hands on the Pixel 10a during a recent Google demo event. As she pointed out, the specs on the Pixel 10a are not like the Pixel 10’s; they’re more similar to the Pixel 9’s. It has a Tensor G4 processor, 8GB of RAM, and up to 256GB of storage, as well as the same camera system, with a 48MP main lens, a 13MP ultrawide lens, and a 13MP selfie camera. The battery life is the same 30+ hours, too, and the MagSafe-like Pixelsnap feature is gone. The main upgrades here are a brighter 3,000-nit screen, a thinner bezel, and an improved Corning Gorilla Glass 7i cover glass. But the value might be in the software and AI.
There are two AI camera features that debuted with the Pixel 10. One is Auto Best Take, which takes 150 frames in one click, chooses the best picture, and automatically deletes the rest (or stitches together elements from multiple shots to make a new “best” image). The other is Camera Coach, which guides you with AI on how to take the best picture. Google also brought Satellite SOS for the first time to an a-series phone. It lets you connect to a satellite and ping emergency services for help if you have no cell signal.
If you’re thinking of upgrading from a Pixel 9a or later, there’s not much here to make it worth it. However, if you have anything older than a Pixel 9 or are switching to Pixel for the first time, this is a great opportunity and phone to do so.
When Apple released the first beta for iOS 26.4 this week, testers immediately got to work looking for each and every new feature and change. To Apple’s credit, there’s more new here than in iOS 26.3, including an AI playlist generator for Apple Music and support for end-to-end encryption with RCS (finally). But one update slipped under the radar, since it’s not actually available to test in this first beta: CarPlay support for AI assistants like ChatGPT, Claude, and Gemini.
AI assistants are coming to CarPlay in iOS 26.4
As spotted by MacRumors, CarPlay’s Developer Guide spills the beans on this upcoming integration. On page 13, the entitlement “CarPlay voice-based conversational app” is listed with a minimum iOS version of iOS 26.4. While it doesn’t specifically mention integrations with ChatGPT, Claude, and Gemini, the documentation does suggest that voice-based conversational apps are a supported app type in iOS 26.4. As such, MacRumors is reporting that companies that make chatbots (i.e. OpenAI, Anthropic, and Google) will need to update their apps to work with CarPlay.
According to MacRumors, drivers will be able to ask apps like ChatGPT, Claude, and Gemini questions while on the road, but they won’t be able to control functions of the car or the driver’s iPhone. You also won’t be able to use a “wake word” to activate the assistant (e.g. “Hey ChatGPT,” or “OK, Gemini”), so you’ll need to tap on the app itself to talk to the assistant.
Apple is issuing guidance to developers on how to implement these assistants in CarPlay starting with this latest update. On page seven, Apple notes that voice-based conversational apps must only work when voice features are actively being used, and avoid showing text or imagery when responding to queries. It’s the first time Apple is allowing developers of “voice-based conversational” apps to develop for CarPlay. While the company has allowed other developers to make apps for its in-car experience, it has obviously put limitations on what types of apps can get through. It makes sense for Google to develop a Google Maps CarPlay app, but TikTok has no business offering drivers a CarPlay version of its algorithm.
Install the iOS 26.4 beta at your own risk
This addition is coming to iOS 26.4, but likely in a future beta. Don’t install the beta at this time expecting to try this feature out—though, you should think twice before installing the beta at all. Betas like iOS 26.4 are temperamental, as Apple is currently testing the software for bugs and stability issues. By installing it early, you risk dealing with those issues, which could impact how you use your iPhone, or even result in data loss.