NASA officials said Thursday they have decided to bring home four of the seven crew members on the International Space Station after one of them experienced a “medical situation” earlier this week.
The space agency has said little about the incident, and officials have not identified which crew member suffered the medical issue. James “JD” Polk, NASA’s chief health and medical officer, told reporters Thursday the crew member is “absolutely stable” but that the agency is “erring on the side of caution” with the decision to return the astronaut to Earth.
The ailing astronaut is part of the Crew-11 mission, which launched to the station August 1 and was slated to come back to Earth around February 20. Instead, the Crew-11 astronauts will depart the International Space Station (ISS) in the coming days and head for reentry and a parachute-assisted splashdown in the Pacific Ocean off the coast of California.
An anonymous reader quotes a report from NPR: [I]t turns out that some genius dogs can learn a brand new word, like the name of an unfamiliar toy, by just overhearing brief interactions between two people. What’s more, these “gifted” dogs can learn the name of a new toy even if they first hear this word when the toy is out of sight — as long as their favorite human is looking at the spot where the toy is hidden. That’s according to a new study in the journal Science. “What we found in this study is that the dogs are using social communication. They’re using these social cues to understand what the owners are talking about,” says cognitive scientist Shany Dror of Eotvos Lorand University and the University of Veterinary Medicine, Vienna. “This tells us that the ability to use social information is actually something that humans probably had before they had language,” she says, “and language was kind of hitchhiking on these social abilities.”
[…] “There’s only a very small group of dogs that are able to learn this differentiation and then can learn that certain labels refer to specific objects,” she says. “It’s quite hard to train this and some dogs seem to just be able to do it.” […] To explore the various ways that these dogs are capable of learning new words, Dror and some colleagues conducted a study that involved two people interacting while their dog sat nearby and watched. One person would show the other a brand new toy and talk about it, with the toy’s name embedded into sentences, such as “This is your armadillo. It has armadillo ears, little armadillo feet. It has a tail, like an armadillo tail.” Even though none of this language was directed at the dogs, it turns out the super-learners registered the new toy’s name and were later able to pick it out of a pile, at the owner’s request.
To do this, the dogs had to go into a separate room where the pile was located, so the humans couldn’t give them any hints. Dror says that as she watched the dogs on camera from the other room, she was “honestly surprised” because they seemed to have so much confidence. “Sometimes they just immediately went to the new toy, knowing what they’re supposed to do,” she says. “Their performance was really, really high.” She and her colleagues wondered if what mattered was the dog being able to see the toy while its name was said aloud, even if the words weren’t explicitly directed at the dog. So they did another experiment that created a delay between the dog seeing a new toy and hearing its name. The dogs got to see the unfamiliar toy and then the owner dropped the toy in a bucket, so it was out of sight. Then the owner would talk to the dog, and mention the toy’s name, while glancing down at the bucket. While this was more difficult for dogs, overall they still could use this information to learn the name of the toy and later retrieve it when asked. “This shows us how flexible they are able to learn,” says Dror. “They can use different mechanisms and learn under different conditions.”
It’s been a while since we last ran benchmarks of the Liquorix kernel, an enthusiast-tailored downstream version of the Linux kernel focused on responsiveness for gaming, audio/video production, and other creator/enthusiast workloads. Today’s article looks at how the latest Liquorix kernel, derived from Linux 6.18, competes against the upstream Linux 6.18 LTS kernel on the same system.
YouTube is updating search filters so users can explicitly choose between Shorts and long-form videos. The change also replaces view-count sorting with a new “Popularity” filter and removes underperforming options like “Sort by Rating.” The Verge reports: Right now, a filter-less search shows a mix of long-form and short-form videos, which can be annoying if you just want to see videos in one format or the other. But in the new search filters, among other options, you can pick to see “Videos,” which in my testing has only showed a list of long-form videos, or “Shorts,” which just shows Shorts.
YouTube is also removing the “Upload Date – Last Hour” and “Sort by Rating” filters because they “were not working as expected and had contributed to user complaints.” The company will still offer other “Upload Date” filters, like “Today,” “This week,” “This Month,” and “This Year,” and you can also find popular videos with the new “Popularity” filter, which is replacing the “View count” sort option. (With the new “Popularity” filter, YouTube says that “our systems assess a video’s view count and other relevance signals, such as watch time, to determine its popularity for that specific query.”)
My final moments at Valve headquarters for the reveal of Steam Frame last year were spent snapping the photos you see sprinkled throughout this article.
Right before that, overwhelmed by my desire to spend more time in Valve’s upcoming headset, I uttered my final question to their engineers.
Can you explain to “an idiot who doesn’t understand how the Internet works” what the difference is between Flatpaks and APKs?
“It’s pretty much the same thing,” a Valve representative answered. “Flatpak is for the Linux desktop. APK is for Android, but it’s similar. It’s a package that contains everything you need to run, that’s gonna run in a sandbox that you can uninstall later. So it’s an application package.”
I quickly recapped for the Valve VR team my formative experience with Windows circa 1995 or 1996. I was granted access to a Windows PC my dad brought home from work and shown a games folder full of a bunch of fun and simple 2D games to play. I also was shown how to get into DOS, and what command to type to launch games like Doom. Soon I was looking up cheat codes online and I quickly filled up the storage inside the PC with more games to play. One day, to make more space, I simply dragged the game files to the recycle bin and hit empty.
I can’t remember the exact sequence of events that followed but I remember a lot of crying accompanying intense fear of my father’s return from work at 5 p.m. The actions of a 10-year-old adding and deleting games left our family PC, meant just for business and school, bootable only in safe mode.
“We have those two tiers on Steam Deck. People that want to go set the OS to read/write mode and change system files, they can do that. But for folks that are just distributing apps that are prepackaged between themselves – the flatpak distribution format, which is similar to the sandbox we run games in, is pretty safe. Like it guarantees that if you remove it, your system is in the same state [as] before you installed it. So it really aims to provide an initial way to engage with a device that is more like an appliance…”
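The lifecycle the Valve representative describes can be sketched in a few commands. This is a hypothetical session, assuming a Linux system where the Flathub remote is already configured; `org.videolan.VLC` is a real Flathub app ID, but the exact output depends on your setup.

```shell
if command -v flatpak >/dev/null 2>&1; then
    # Install an app from Flathub into its own sandbox
    flatpak install -y flathub org.videolan.VLC

    # Inspect the sandbox permissions the app declares before trusting it
    flatpak info --show-permissions org.videolan.VLC

    # Remove the app along with its sandboxed data; the system returns to
    # the state it was in before the install, as described above
    flatpak uninstall -y --delete-data org.videolan.VLC
else
    echo "flatpak not available on this system"
fi
```

The key property is the last step: because the app and its data live inside the sandbox, uninstalling guarantees no stray system files are left behind, which is exactly the “appliance” behavior described above.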
Thirty years have passed since my first direct interactions with the Internet and personal computers. As a father now in the 2020s, I found myself over Christmas break explaining to my teenager the difference between a Mac and a Steam Deck, and why some games they love run on one system and not the other. Our conversation revealed a spectrum of openness in computing, with different amounts of power for physics and graphics. Some developers haven’t been paid to do the work to put a particular game on a specific system and make it run really well there.
I got that part across just fine. What I had trouble conveying is why openness and offline computing matters. I want an appliance that’s both hard to break and easy to use, and I want a playground for everyone at least as big as the one I had to explore in 1995.
I believe Valve is trying to make that happen in SteamOS with Linux.
Openness & Offline Computing Make Playgrounds For Discovery
A kid’s first computer is often an Internet-connected device like an iPhone, an iPad, an Android phone, or a school-issued Chromebook, all of which require an online account to operate. Caregivers prepare those accounts for the children and, over the last few years, platform companies providing online services have worked steadily to enable more stuff for kids and teens to do on these managed accounts.
After a very short time with the Steam Frame on my head I kicked off my shoes, reclined comfortably on a couch, and started searching the open Web using a browser in the Linux desktop with my voice. I have no idea what account was logged into the headset and it didn’t matter – I was doing whatever I wanted inside SteamOS and Linux.
I didn’t have time to try it at their offices, but the moment Valve sends out review units I’m going to click play on a great number of games in Steam loaded up on a 2 terabyte microSD card to see how they run.
As I create a mental picture of how much space I have to play inside Steam Frame in standalone mode, and before I connect the headset to another PC for what Valve itself describes as a streaming-first device, I’m going to open the Linux desktop and see what mischief I can manage. I’m looking to install apps like VLC for watching videos, Discord for chatting with friends, RetroArch for playing classic games, and Spotify for streaming music. That’s a lot I expect to find right out of the box in Flathub, described as the “Linux App Store”.
Popular apps on Flathub in January 2026.
The key takeaway here is that I expect to install more stuff to the Steam Frame headset using Linux directly on day one than I did in four days with Android XR. And I expect to be able to install a lot more to the headset overall than I have been able to in years of ownership of Vision Pro or Quest. Am I going to install my own operating system? Probably not. But am I going to personally screw up my installation of Linux so bad the system is going to need to restore me to factory settings? After this many years messing with computers, that’s probably my goal and I’m going to enjoy doing it.
Decades after my incident in 1990s Windows, as I was explaining to a teenager what a Mac and a Steam Deck can do, I found myself overwhelming them. My effort to make personal computing seem less daunting than it was for my forefathers was not going well. I told my eldest that if they break a Steam Deck by installing too many games and modding the system with software, I would be impressed.
Back in 2019, Facebook set up a call with me to discuss its “console-like curation strategy” for the Quest ecosystem. Today, Meta’s leadership has abandoned that strategy entirely for a policy of openness, as a great many developers struggle for sales inside an ecosystem now flush with low-quality projects. Meta seems to have meant for Horizon Worlds to be the floor of the Quest ecosystem, but requiring a Meta account and giving developers a cut of subscriptions is providing neither stable footing to keep developers afloat nor must-have reasons to put on a headset.
Now consider the game Valve is playing in comparison, and the space its engineers are making for experimentation. For the last decade, Valve has funded faceless developers worldwide to work on a series of key open source projects that form the basis of SteamOS, all aimed at making it easier to play games with computers.
“A lot of what you’re experiencing here when you wear that and play a game is gonna be powered by a ton of the open source work we’ve been doing for the last decade or so, just ranging from SteamOS itself, which has elements dating all the way from the first version of SteamOS in 2013,” a Valve representative explained. “The way we’re running desktop games in this, the way we’re doing things like the graphics driver, it’s all open source. Proton is all open source. That’s been hundreds of people for a decade working on that stuff. And of course, SteamOS is based on Arch Linux. The desktop here is powered by Plasma, so it’s KDE Plasma, which is one of the major two desktops available on Linux. For the better part of a decade, we’ve been actually working directly with Plasma developers and funding them so they can improve the desktop with just gaming use cases in mind.”
“If folks on an experience that’s more curated and more closed off are having a good experience, that’s fine. But in general, we see people that are trying to experience a variety of games in different ways. There’s a bunch of stuff that they might wanna do that we haven’t thought of,” the Valve representative said. “And what we always observe is that there’s a ton of value that is usually distributed laterally in the community, where users between themselves will share stuff that will make the experience better. And that is only possible in an open platform. We don’t want all the value in a platform like that to be flowing up and down through us, and for us to be determining what’s a good experience or not on behalf of all those users that might have different opinions and different aspirations. So it’s really important for us to keep that open because it creates those kinds of effects that eventually leads to a better experience. Also anyone that’s using this stuff can also go and contribute patches and develop on it. And so we’re excited to be able to have stuff get even better because people now want to contribute to it.”
“In fact, a lot of the developers that are working on open source have started because they were users and they just want to improve a specific aspect and they go deep into it. The lines between user and developer have always been very blurry for us. We’ve always come from a world where some of our most popular game properties actually started out as mods. And modding on PC was always like a strong thing that we were always trying to support. Because so many good concepts and new game genres, free to play, MOBAs, all that stuff came through mods initially, right? If you look at the history of video games–different genres, different ways to experience games, different peripherals–a lot of it came from PC because PC was an open platform where different companies could innovate in different ways, but also users could mod. And people that created closed off platforms based on some of those concepts, they’re gonna take some of those concepts and kind of freeze them in time. And then PC’s gonna keep moving forward because it’s open and we have all this value. And we are just applying PC to VR, so it’s nothing new for us. We’ve always applied PC to VR. Some folks have opted to like branch it off in different directions, but I think we’re just doing the same thing as we’ve always been doing.”
Longtime Slashdot reader schwit1 shares a report from Reuters: Billionaire entrepreneur Elon Musk persuaded a judge on Wednesday to allow a jury trial on his allegations that ChatGPT maker OpenAI violated its founding mission in its high-profile restructuring to a for-profit entity. Musk was a cofounder of OpenAI in 2015 but left in 2018 and now runs an AI company that competes with it.
U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California, said at a hearing that there was “plenty of evidence” suggesting OpenAI’s leaders made assurances that its original nonprofit structure was going to be maintained. The judge said there were enough disputed facts to let a jury consider the claims at a trial scheduled for March, rather than decide the issues herself. She said she would issue a written order after the hearing that addresses OpenAI’s bid to throw out the case.
[…] Musk contends he contributed about $38 million, roughly 60% of OpenAI’s early funding, along with strategic guidance and credibility, based on assurances that the organization would remain a nonprofit dedicated to the public benefit. The lawsuit accuses OpenAI co-founders Sam Altman and Greg Brockman of plotting a for-profit switch to enrich themselves, culminating in multibillion-dollar deals with Microsoft and a recent restructuring. OpenAI, Altman and Brockman have denied the claims, and they called Musk “a frustrated commercial competitor seeking to slow down a mission-driven market leader.”
Microsoft is also a defendant and has urged the judge to toss Musk’s lawsuit. A lawyer for Microsoft said there was no evidence that the company “aided and abetted” OpenAI.
OpenAI in a statement after the hearing said: “Mr Musk’s lawsuit continues to be baseless and a part of his ongoing pattern of harassment, and we look forward to demonstrating this at trial.”
Canonical is making it easier for ARM64 Ubuntu users like those on the NVIDIA DGX Spark to do a bit of gaming with Steam. Canonical engineers have assembled a Steam Snap for 64-bit ARM that comes complete with the FEX emulator for running Windows/Linux x86-based games on ARM64 Linux…
Discover the state of Enterprise Linux for networking in 2026. Compare RHEL 10, SLES 16, and Ubuntu alongside NOS leaders like SONiC and NVIDIA Cumulus.
The Illinois Department of Human Services disclosed that a misconfigured internal mapping website exposed sensitive personal data for more than 700,000 Illinois residents for over four years, from April 2021 to September 2025. Officials say they can’t confirm whether the publicly accessible data was ever viewed. TechCrunch reports: Officials said the exposed data included personal information on 672,616 individuals who are Medicaid and Medicare Savings Program recipients. The data included their addresses, case numbers, and demographic data — but not individuals’ names. The exposed data also included names, addresses, case statuses, and other information relating to 32,401 individuals in receipt of services from the department’s Division of Rehabilitation Services.
An anonymous reader quotes a report from Wired: Google is putting even more generative AI tools into Gmail as part of its goal to further personalize user inboxes and streamline searches. On Thursday, the company announced a new “AI Inbox” tab, currently in a beta testing phase, that reads every message in a user’s Gmail and suggests a list of to-dos and key topics, based on what it summarizes. In Google’s example of what this AI Inbox could look like in Gmail, the new tab takes context from a user’s messages and suggests they reschedule their dentist appointment, reply to a request from their child’s sports coach, and pay an upcoming fee before the deadline. Also under the AI Inbox tab is a list of important topics worth browsing, nestled beneath the action items at the top. Each suggested to-do and topic links back to the original email for more context and for verification.
[…] For users who are concerned about their privacy, the information Google gleans by skimming through inboxes will not be used to improve the company’s foundational AI models. “We didn’t just bolt AI onto Gmail,” says Blake Barnes, who leads the project for Google. “We built a secure privacy architecture, specifically for this moment.” He emphasizes that users can turn off Gmail’s new AI tools if they don’t want them. At the same time Google announced its AI Inbox, the company made multiple Gemini features, previously available only to paying subscribers, free for all Gmail users. This includes the Help Me Write tool, which generates emails from a user prompt, as well as AI Overviews for email threads, which essentially posts a TL;DR summary at the top of long message threads. Subscribers to Google’s Ultra and Pro plans, which start at $20 a month, get two additional new features in their Gmail inbox. First, an AI proofreading tool that suggests more polished grammar and sentence structures. And second, an AI Overviews tool that can search your whole inbox and create relevant summaries on a topic, rather than just summarizing a single email thread.
Lumus got a major boost in brand recognition when one of its waveguides was selected for use in the Meta Ray-Ban Display glasses. But that already feels like old tech now because at CES 2026, the company brought some of its latest components to the show and based on what I saw, they seem poised to seriously elevate the optical quality of the next wave of high-end smartglasses.
When the Meta Ray-Ban Display glasses came out, they wowed users as they were (and still are) one of a handful of smartglasses to feature a full-color in-lens display with at least a 20-degree field of view. But going by the specs on Lumus’ newest waveguides, we’re set for a major upgrade in terms of future capabilities.
If you look closely, you can see where light from the waveguide propagates into one of the smartglasses’ lenses.
Sam Rutherford for Engadget
The first model I tried featured Lumus’ optimized Z-30 waveguides, which not only offer a much wider 30-degree FOV but are also 30 percent lighter and 40 percent thinner than previous generations. On top of that, Lumus says they are more power efficient, with the waveguides capable of hitting more than 8,000 nits per watt. This is a big deal because smartglasses are currently quite limited by the size of the batteries they can use, especially if you want to make them small and light enough to wear all day. When I tried them on, I was dazzled by both the brightness and sharpness I saw from the Z-30s despite them being limited to 720 x 720 resolution. Not only did the increase in FOV feel much larger than 10 degrees, but colors were also very rich, including white, which is often one of the most difficult shades to properly reproduce.
I had to take a photo of one of Lumus’ non-functioning smartglasses with the company’s 70-degree FOV waveguide, because two out of three of the working ones had already broken and the last one I used was being held together by tape.
Sam Rutherford for Engadget
However, even after seeing how good that first model was, I was totally unprepared for Lumus’ 70-degree FOV waveguides. I was able to view some videos and a handful of test images, and I was completely blown away by how much area they covered. It was basically the entire center portion of the lens, with only small unused areas around the corners. And while I did notice some pincushion distortion along the sides of the waveguide’s display, a Lumus representative told me that it will be possible to correct for that in final retail units. But make no mistake, these waveguides undoubtedly produced some of the sharpest, brightest and best-looking optics I’ve seen from any smartglasses, retail models and prototypes alike. It almost made me question how much wider a FOV these types of gadgets really need, though to be clear, I don’t think we’ve hit the point of diminishing returns yet.
This is one of Lumus’ thinnest waveguides measuring in at just 0.8mm.
Sam Rutherford for Engadget
Other advantages of Lumus’ geometric reflective waveguides include better overall efficiency than their refractive counterparts, along with the ability to optically bond the displays to smartglasses lenses. That means that, unlike a lot of rivals, Lumus’ waveguides can be paired with transition lenses instead of needing to resort to clip-on sunglass attachments when you go outside. Lumus also claims its design simplifies the manufacturing process, resulting in thinner waveguides (as small as 0.8mm) and generally higher yields.
Unfortunately, taking high-quality photos of content from smartglasses displays is incredibly challenging, especially when you’re using extremely delicate prototypes, so you’ll just have to take my word for now. But with Lumus in the process of ramping up production of its new waveguides with help from partners including Quanta and SCHOTT, it feels like there will be a ton of smartglasses makers clamoring for these components as momentum continues to build around the industry’s pick for the next “big” thing.
This article originally appeared on Engadget at https://www.engadget.com/wearables/lumus-brought-a-massively-wider-fov-to-smartglasses-at-ces-2026-233245949.html?src=rss
The Paris Judicial Court ordered Google to block additional pirate sports-streaming domains at the DNS level, rejecting Google’s argument that enforcement should target upstream providers like Cloudflare first. “The blockade was requested by Canal+ and aims to stop pirate streams of Champions League games,” notes TorrentFreak. From the report: Most recently, Google was compelled to take action following a complaint from French broadcaster Canal+ and its subsidiaries regarding Champions League piracy. Like previous blocking cases, the request is grounded in Article L. 333-10 of the French Sports Code, which enables rightsholders to seek court orders against any entity that can help to stop ‘serious and repeated’ sports piracy. After reviewing the evidence and hearing arguments from both sides, the Paris Court granted the blocking request, ordering Google to block nineteen domain names, including antenashop.site, daddylive3.com, livetv860.me, streamysport.org and vavoo.to.
The latest blocking order covers the entire 2025/2026 Champions League series, which ends on May 30, 2026. It’s a dynamic order too, which means that if these sites switch to new domains, as verified by ARCOM, these have to be blocked as well. Google objected to the blocking request. Among other things, it argued that several domains were linked to Cloudflare’s CDN. Therefore, suspending the sites on the CDN level would be more effective, as that would render them inaccessible. Based on the subsidiarity principle, Google argued that blocking measures should only be ordered if attempts to block the pirate sites through more direct means have failed.
The court dismissed these arguments, noting that intermediaries cannot dictate the enforcement strategy or blocking order. Intermediaries cannot require “prior steps” against other technical intermediaries, especially given the “irremediable” character of live sports piracy. The judge found the block proportional because Google remains free to choose the technical method, even if the result is mandated. Internet providers, search engines, CDNs, and DNS resolvers can all be required to block, irrespective of what other measures were taken previously. Google further argued that the blocking measures were disproportionate because they were complex, costly, easily bypassed, and had effects beyond the borders of France.
The Paris court rejected these claims. It found that Google failed to demonstrate that implementing these blocking measures would result in “important costs” or technical impossibilities. Additionally, the court recognized that there would still be options for people to bypass these blocking measures. However, the blocks are a necessary step to “completely cease” the infringing activities.
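Mechanically, a DNS-level block of this kind amounts to a resolver checking a blocklist before answering a query. The toy sketch below is an illustrative stand-in, not Google’s actual implementation; the blocked domain names come from the order described above, while the `records` table and example IP address are hypothetical.

```python
# Domains named in the blocking order (per the report above)
BLOCKED = {
    "antenashop.site", "daddylive3.com", "livetv860.me",
    "streamysport.org", "vavoo.to",
}

def resolve(domain, records):
    """Return the A record for a domain, or None (an NXDOMAIN-style
    non-answer) if the domain is blocked or unknown."""
    name = domain.lower().rstrip(".")  # normalize case and trailing dot
    if name in BLOCKED:
        return None  # blocked at the resolver: the client gets no answer
    return records.get(name)

records = {"example.com": "93.184.216.34"}
print(resolve("example.com", records))  # 93.184.216.34
print(resolve("vavoo.to", records))     # None (blocked per the order)
```

A dynamic order like this one simply means `BLOCKED` must be extended whenever ARCOM verifies a new replacement domain; nothing about the resolver logic itself changes.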
Microsoft is embedding full e-commerce checkout directly into Copilot chats, letting users buy products without ever visiting a retailer’s website. “If checkout happens inside AI conversations, retailers risk losing direct customer relationships — while platforms like Microsoft gain leverage,” reports Axios. From the report: Microsoft unveiled new agentic AI tools for retailers at the NRF 2026 retail conference, including Copilot Checkout, which lets shoppers complete purchases inside Copilot without being redirected to a retailer’s website. The checkout feature is live in the U.S. with Shopify, PayPal, Stripe and Etsy integrations.
Copilot apps have more than 100 million monthly active users, spanning consumer and commercial audiences, according to the company. More than 800 million monthly active users interact with AI features across Microsoft products more broadly. Shopping journeys involving Copilot are 33% shorter than traditional search paths and see a 53% increase in purchases within 30 minutes of interaction, Microsoft says. When shopping intent is present, journeys involving Copilot are 194% more likely to result in a purchase than those without it.
This is a video of Morley Kert building a mini library in the wall of the landing of his stairs (technically his landlord’s stairs), complete with a secret door that takes a special combination to unlock. That’s whimsical. And you know how I feel about whimsy! “It’s your middle name.” Haha, I do tell people that at parties, but it isn’t really.
At CES 2026, NVIDIA finally revealed its long-awaited lineup of first-generation G-SYNC Pulsar monitors—and one of them is the MSI MPG 272QRF X36. Like the other new monitors revealed at NVIDIA’s conference, the MSI MPG 272QRF X36 is a 27-inch Rapid IPS monitor in 2560×1440 resolution, with full G-SYNC support up to 360 Hz. Where things get
An anonymous reader quotes a report from Ars Technica: The Federal Communications Commission plans to authorize a new category of wireless devices in the 6 GHz Wi-Fi band that will be permitted to operate at higher power levels than currently allowed. The FCC will also consider authorizing higher power levels for certain wireless devices that are only allowed to operate indoors. The FCC said it scheduled a vote for its January 29 meeting on an order “to create a new category of unlicensed devices… that can operate outdoors and at higher power than previously authorized devices.” These so-called Geofenced variable power (GVP) devices operating on the 6 GHz band will “support high data rates suitable for AR/VR, short-range hotspots, automation, and indoor navigation,” and “overcome limitations of previous device classes by allowing higher power and outdoor mobility,” the FCC said. They will be required to work with geofencing systems to avoid interference with fixed microwave links and radio astronomy observatories.
FCC Chairman Brendan Carr attributed the FCC’s planned action to President Trump in a press release titled, “President Trump Unleashes American Innovation With 6 GHz Win.” That’s consistent with Carr’s relatively new stance that the FCC takes orders from the president, despite his insisting during the Biden era that the FCC must operate independently from the White House. While many of Carr’s regulatory decisions have been criticized by consumer advocates, the 6 GHz action is an exception. Michael Calabrese, of New America’s Open Technology Institute, told Ars that “increasing the power levels for Wi-Fi connections to peripheral devices such as AR/VR is a big win for consumers” and a change that has been “long advocated by the Wi-Fi community.”
Carr said that the FCC “will vote on an order that expands unlicensed operations in the 6 GHz band so that consumers can benefit from better, faster Wi-Fi and an entirely new generation of wireless devices — from AR/VR and IoT to a range of innovative smart devices. [It] will do so through a set of forward-looking regulations that allow devices to operate at higher power while protecting incumbent users, including through geofencing systems.” […] A draft of the order said the planned “additional power will enable composite standard-power/LPI access points to increase indoor coverage and provide more versatility to American consumers.” The FCC will also seek comment on a proposal to authorize LPI access points on cruise ships.
In 2002, Bryan Fleming helped to create pcTattletale, software for monitoring phone and computer usage. Fleming’s tool would record everything done on the target device, and the videos would be uploaded to a server where they could be viewed by the pcTattletale subscriber.
This might sound creepy, but it can also be legal when used by a parent monitoring their child or an employee monitoring their workers. These are exactly the use cases that were once outlined on pcTattletale’s website, where the software was said to have “helped tens of thousands of parents stop their daughters from meeting up with pedophiles.” Businesses can “track productivity, theft, lost hours, and more.” Even “police departments use it for investigating.”
But this week, nearly 25 years after launching pcTattletale, Fleming pled guilty in federal court to having knowingly built and marketed software to spy on other adults without their consent. In other words, pcTattletale was often used to spy on romantic partners without their knowledge—and Fleming helped people do it.
This is a cross-section of a chainsaw doing its thing filmed at 20,000 frames per second. It’s so mesmerizing to watch it’s almost easy to forget that thing will take my leg off. Won’t you, chainsaw? WON’T YOU?! It really wants to, I can tell. I made the mistake of trying to carve the Thanksgiving turkey with a chainsaw and now it has a taste for thigh.