
FLoC is a recent Google proposal that would have your browser share your browsing behavior and interests by default with every site and advertiser with which you interact. Brave opposes FLoC, along with any other feature designed to share information about you and your interests without your fully informed consent. To protect Brave users, Brave has removed FLoC in the Nightly version of both Brave for desktop and Android. The privacy-affecting aspects of FLoC have never been enabled in Brave releases; the additional implementation details of FLoC will be removed from all Brave releases with this week’s stable release. Brave is also disabling FLoC on our websites, to protect Chrome users learning about Brave.

Companies are finally being forced to respect user privacy (even if only minimally), pushed by trends such as increased user education, the success of privacy-first tools (e.g., Brave, among others), and the growth of legislation including the CCPA and GDPR. In the face of these trends, it is disappointing to see Google, instead of taking the present opportunity to help design and build a user-first, privacy-first Web, proposing and immediately shipping in Chrome a set of smaller, ad-tech-conserving changes which explicitly prioritize maintaining the structure of the Web advertising ecosystem as Google sees it.

For the Web to be trusted and to flourish, we hold that much more is needed than the complex yet conservative chair-shuffling embodied by FLoC and Privacy Sandbox. Deeper changes to how creators pay their bills via ads are not only possible, but necessary. The success of Brave’s privacy-respecting, performance-maintaining, and site-supporting advertising system shows that more radical approaches work. We invite Google to join us in fixing the fundamentals, undoing the harm that ad-tech has caused, and building a Web that serves users first.
The rest of this post explains why we believe FLoC is bad for Web users, bad for sites, and a bad direction for the Web in general. FLoC harms privacy directly and by design: FLoC shares information about your browsing behavior with sites and advertisers that otherwise wouldn’t have access to that information. Unambiguously, FLoC tells sites about your browsing history in a way that browsers today categorically do not.

Google claims that FLoC is privacy-improving, despite intentionally telling sites more about you, for broadly two reasons, each of which conflates unrelated topics. First, Google says FLoC is privacy-preserving compared to sending third-party cookies. But this is a misleading baseline to compare against. Many browsers don’t send third-party cookies at all; Brave never has. Saying a new Chrome feature is privacy-improving only when compared to status-quo Chrome (the most privacy-harming popular browser on the market) is misleading, self-serving, and a further reason for users to run away from Chrome.

Second, Google defends FLoC as not privacy-harming because interest cohorts are designed not to be unique to a user, using k-anonymity protections. This reflects a mistaken idea of what privacy is. Many things about a person are i) not unique, but still ii) personal and important, and shouldn’t be shared without consent. Whether I prefer to wear “men’s” or “women’s” clothes, whether I live according to my professed religion, whether I believe vaccines are a scam, whether I am a gun owner, or a Brony fan, or a million other things: these are all aspects of our lives that we might like to share with some people but not others, and on our own terms and under our control.

FLoC adds an enormous amount of fingerprinting surface to the browser, as the whole point of the feature is for sites to be able to distinguish between user interest-group cohorts.
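To see why cohort labels add real fingerprinting surface even when each cohort is k-anonymous, it helps to count bits. A label drawn uniformly from N cohorts reveals roughly log2(N) bits of identifying information, and those bits combine with whatever classic fingerprinting (fonts, canvas, screen size, etc.) already reveals. The numbers below are illustrative assumptions, not Google’s actual cohort counts or fingerprinting measurements:

```python
import math

def identifying_bits(num_buckets: int) -> float:
    """Bits of identifying information revealed by a label drawn
    uniformly from num_buckets equally likely values."""
    return math.log2(num_buckets)

def anonymity_set(population: int, total_bits: float) -> float:
    """Expected number of users indistinguishable from you after
    observers learn total_bits of identifying information."""
    return population / (2 ** total_bits)

# Illustrative numbers only:
cohort_bits = identifying_bits(32_768)   # ~15 bits if there were 32,768 cohorts
other_fp_bits = 18                       # assumed bits from classic fingerprinting
pop = 3_000_000_000                      # rough global browser-user population

print(f"cohort ID alone: {cohort_bits:.0f} bits")
print(f"cohort ID alone leaves you hidden among "
      f"{anonymity_set(pop, cohort_bits):.0f} users")
print(f"cohort ID plus fingerprinting leaves "
      f"{anonymity_set(pop, cohort_bits + other_fp_bits):.2f} users")
```

With these assumed numbers, the cohort ID alone still leaves a large crowd, but combined with ordinary fingerprinting the expected anonymity set drops below one user, i.e., unique identification.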
This undermines the work Brave is doing to protect users against browser fingerprinting and the statistically inferred cohort tracking enabled by fingerprinting attack surface. Google’s proposed solution to the increased fingerprinting risk from FLoC is both untestable and unlikely to work. Google proposes using a “privacy budget” approach to prevent FLoC from being used to track users. First, Brave has previously detailed why we do not think a “budget” approach is workable for preventing fingerprinting-based tracking. We stand by those concerns, and have not received any response from Google, despite having raised the concerns over a year ago. And second, Google has yet to specify how its “privacy budget” approach will work; the approach is still in the “feasibility-testing” stage.

Google is aware of some of these concerns, but gives them shallow treatment in its proposal. For example, Google notes that some categories (sexual orientation, medical issues, political party, etc.) will be exempt from FLoC, and that it is looking into other ways of preventing “sensitive” categories from being used in FLoC. Google’s approach here is fundamentally wrong. First, Google’s approach to determining whether a FLoC cohort is sensitive requires (in most cases) Google to record and collect that sensitive cohort in the first place! A system that determines whether a cohort is “sensitive” by recording how many people are in that cohort doesn’t pass the laugh test.

Second, and more fundamentally, the idea of creating a global list of “sensitive categories” is illogical and immoral. Whether a behavior is “sensitive” varies wildly across people. One’s mom may not find her interest in “women’s clothes” a private part of her identity, but one’s dad might (or might not! but, plainly, Google isn’t the appropriate party to make that choice).
Similarly, an adult happily expecting a child might not find their interest in “baby goods” particularly sensitive, but a scared and nervous teenager might. More broadly, interests that are banal to one person might be sensitive, private, or even dangerous to another person. The point isn’t that Google’s list of “sensitive cohorts” will be missing important items. The point, rather, is that a “privacy preserving system” that relies on a single, global determination of what behaviors are “privacy sensitive” fundamentally doesn’t protect privacy, or even understand why privacy is important. Visit OUR FORUM for more.

A timely reminder has been shared of how the current global chip famine has affected processor prices, in this case specifically for the AMD Ryzen 9 5950X. While retailers who have tried to stay close to MSRP are invariably out of stock, those with Ryzen 9 5950X CPUs to sell are mostly setting astronomical price tags for the Zen 3 powerhouse. Those looking to snag a 16-core, 32-thread AMD Ryzen 9 5950X for a reasonable price will already be aware of how difficult a task that has become. The 2021 global chip shortage, caused by a combination of the coronavirus pandemic, companies shifting to a work-from-home strategy, and previously unpredictable rocketing demand, has led to much-wanted PC parts, especially high-end units like the Ryzen 9 5950X CPU and GeForce RTX 3090 GPU, being sold at greatly inflated prices.

A recent Reddit post by a Redditor called locutusuk68 has triggered quite a discussion on the popular social website on this processor-pricing theme, with an accompanying screenshot revealing how the UK retailer Overclockers is currently selling the top-end Zen 3 processor for a staggering £959.99 (US$1,316/AUD$1,726). The MSRP for the AMD Ryzen 9 5950X is US$799, while PC builders in the UK may have expected to pay in the region of £750 (US$1,028/AUD$1,349) for the chip. In fact, one of the country’s largest electronics retailers, Currys, has the 16-core part listed for that fair price along with a price-match guarantee. Of course, it’s out of stock.

Shopping around does not really deliver much relief, because those stores that look like they might offer reasonable deals may either be unfamiliar (Box - £849.99) or have incredibly limited stock (CCL - £899). A listing on eBay for multiple units of the Ryzen 9 5950X has a “buy it now” offer at £1,085.49 (US$1,488/AUD$1,952) per part, while a retailer called OnBuy takes the biscuit with a price tag of £1,099.95 (US$1,508/AUD$1,978).
In fact, just for added shock value, there is even a mention of AMD’s Ryzen 9 5950X being priced at an insane £1,480.72 (US$2,030/AUD$2,662). Of course, this same discouraging picture for desktop DIYers exists in other markets: Best Buy also has a price-match guarantee for the Zen 3 part at US$799 but is sold out, and if you take a look at Amazon there is sometimes stock listed as available, but in some cases you have to be willing to part with US$1,288.99. However, retailers that are reliant on low unit sales are just utilizing an age-old business tactic of hiking prices when demand exceeds supply. An accusatory finger can be pointed at Team Red, but did AMD really reckon on a million Ryzen 5000 unit sales within a few weeks of release? Supply is apparently ramping up, so arguably the best thing desktop PC builders can do right now is to hold on. Eventually, supply will catch up with demand and prices will fall…or Zen 4 might even be around by the time that happens. Follow this and more by visiting OUR FORUM.
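For a sense of scale, the listings above work out to markups of roughly 65% to 155% over AMD’s US$799 list price. A quick sketch of that arithmetic, using the article’s approximate USD conversions:

```python
MSRP_USD = 799  # AMD's list price for the Ryzen 9 5950X

# Approximate street prices quoted above, converted to USD in the article.
street_prices = {
    "Overclockers (UK)": 1316,
    "eBay 'buy it now'": 1488,
    "OnBuy": 1508,
    "worst listing seen": 2030,
}

def markup_pct(price: float, msrp: float = MSRP_USD) -> float:
    """Percentage paid over the manufacturer's suggested retail price."""
    return (price - msrp) / msrp * 100

for seller, price in street_prices.items():
    print(f"{seller}: {markup_pct(price):.0f}% over MSRP")
```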

Today, Google launched an “origin trial” of Federated Learning of Cohorts (aka FLoC), its experimental new technology for targeting ads. A switch has silently been flipped in millions of instances of Google Chrome: those browsers will begin sorting their users into groups based on behavior, then sharing group labels with third-party trackers and advertisers around the web. A random set of users have been selected for the trial, and they can currently only opt out by disabling third-party cookies. Although Google announced this was coming, the company has been sparse with details about the trial until now. We’ve pored over blog posts, mailing lists, draft web standards, and Chromium’s source code to figure out exactly what’s going on.

EFF has already written that FLoC is a terrible idea. Google’s launch of this trial, without notice to the individuals who will be part of the test, much less their consent, is a concrete breach of user trust in the service of a technology that should not exist. Below we describe how this trial will work, and some of the most important technical details we’ve learned so far.

FLoC is supposed to replace cookies. In the trial, it will supplement them. Google designed FLoC to help advertisers target ads once third-party cookies go away. During the trial, trackers will be able to collect FLoC IDs in addition to third-party cookies. That means all the trackers who currently monitor your behavior across a fraction of the web using cookies will now receive your FLoC cohort ID as well. The cohort ID is a direct reflection of your behavior across the web, and it could supplement the behavioral profiles that many trackers already maintain.

As described above, a random portion of Chrome users will be enrolled in the trial without notice, much less consent. Those users will not be asked to opt in. In the current version of Chrome, users can only opt out of the trial by turning off all third-party cookies.
Future versions of Chrome will add dedicated controls for Google’s “privacy sandbox,” including FLoC. But it’s not clear when these settings will go live, and in the meantime, users wishing to turn off FLoC must turn off third-party cookies as well. Turning off third-party cookies is not a bad idea in general. After all, cookies are at the heart of the privacy problems that Google says it wants to address. But turning them off altogether is a crude countermeasure, and it breaks many conveniences (like single sign-on) that web users rely on. Many privacy-conscious users of Chrome employ more targeted tools, including extensions like Privacy Badger, to prevent cookie-based tracking. Unfortunately, Chrome extensions cannot yet control whether a user exposes a FLoC ID.

FLoC calculates a label based on your browsing history. For the trial, Google will default to using every website that serves ads, which is the majority of sites on the web. Sites can opt out of being included in FLoC calculations by sending an HTTP header, but some hosting providers don’t give their customers direct control of headers. Many site owners may not be aware of the trial at all. This is an issue because it means that sites lose some control over how their visitors’ data is processed. Right now, a site administrator has to make a conscious decision to include code from an advertiser on their page. Sites can, at least in theory, choose to partner with advertisers based on their privacy policies. But now, information about a user’s visit to that site will be wrapped up in their FLoC ID, which will be made widely available (more on that in the next section). Even if a website has a strong privacy policy and relationships with responsible advertisers, a visit there may affect how trackers see you in other contexts. For complete details visit OUR FORUM.
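The opt-out header in question is `Permissions-Policy: interest-cohort=()`. As a minimal sketch of how a site could send it, here is a standard-library Python server (the handler name and port are illustrative; in practice the same header is usually set in the web server or CDN configuration):

```python
# Minimal sketch: serving pages with the FLoC opt-out header using
# only the Python standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer

class FLoCOptOutHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Asks Chrome not to include visits to this site in FLoC
        # cohort calculation.
        self.send_header("Permissions-Policy", "interest-cohort=()")
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<p>This site opts out of FLoC.</p>")

# To serve locally:
# HTTPServer(("127.0.0.1", 8000), FLoCOptOutHandler).serve_forever()
```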

You may have a roommate you have never met. And even worse, they are nosy. They track what you watch on TV, they track when you leave the lights on in the living room, and they even track whenever you use a key fob to enter the house. This is the reality of living in a “smart home”: the house is always watching, always tracking, and sometimes it offers that data up to the highest bidder, or even to police.

This problem stems from the US government buying data from private companies, a practice increasingly unearthed in media investigations though still quite shrouded in secrecy. It’s relatively simple in a country like the United States without strong privacy laws: approach a third-party firm that sells databases of information on citizens, pay them for it, and then use the data however deemed fit. The Washington Post recently reported, citing documents uncovered by researchers at Georgetown’s law school, that US Immigration and Customs Enforcement (ICE) has been using this very playbook to buy up “hundreds of millions of phone, water, electricity, and other utility records while pursuing immigration violations”.

“Modern surveillance” might evoke images of drones overhead, smartphones constantly pinging cell towers, and facial recognition deployed at political protests. All of these are indeed unchecked forms of 21st-century monitoring, often in uniquely concerning ways. Facial recognition, for instance, can be run continuously, from a distance, with minimal human involvement in the search and surveillance process. But the reporting on ICE’s use of utility records is a powerful reminder that it’s not just flashy gadgets that increasingly watch our every move; there’s also a large and ever-growing economy of data brokerage, in which companies and government agencies, law enforcement included, can buy up data on millions of Americans that we might not even think of as sensitive.
Privacy protections in the United States are generally quite weak; when it comes to police purchases of private data, they are completely absent. This is one of the oddities of trying to update 18th-century rights to address 21st-century threats. At the time of the country’s founding, the framers wrote about protecting things like our homes, our papers, and other physical objects. Flash forward to today, and these categories fail to capture most of our intimate data, including the ins and outs of your daily routine captured by a nosy electronic roommate, or a data broker.

Courts have been slow to update these legal categories to include computers and other electronic records. But while we now have the same protections for our laptops as our paper records, the matter gets much less clear in the cloud. The documents and data we access remotely every day can end up in a gray zone outside the clear protections afforded in our homes and offices. Whether it’s our financial records, our phone records, or the countless other records held about us by third parties, this data is generally open to police even without a warrant. This so-called “third-party doctrine” has come under more scrutiny in recent years, and there is some hope the courts will catch up with the changes in technology. Until they do, however, nearly all the data held about us by private companies remains completely exposed.

This is how utility records might end up in the hands of law enforcement via a private company, and how smart-home devices like thermostats and fridges could very well be sending off your data to be sold away. While the recent Washington Post story focused on data brokerage and utility records, the smart-home phenomenon makes this problem of data sale and unchecked surveillance even worse. These gadgets are sold as flashy, affordable, and convenient.
But despite all that has been written about the speculative benefits of the so-called Internet of Things, these technologies are often terribly insecure and may provide few to no details to consumers on how they’re protecting our data. Ring, Amazon’s home security system, has documented surveillance ties with law enforcement; that is but one example. The more that smart devices are marketed in the absence of strong federal privacy protections, the more likely it’s not just about hackers half a world away controlling your home’s temperature – it’ll also be about arrests and deportations with the help of smart-home data. Read more on OUR FORUM.

A database containing the phone numbers of more than half a billion Facebook users is being freely traded online, and Facebook is trying to pin the blame on everyone but itself. A blog post titled “The Facts on News Reports About Facebook Data,” published Tuesday evening, is designed to silence the growing criticism the company is facing for failing to protect the phone numbers and other personal information of 533 million users after a database containing that information was shared for free in low-level hacking forums over the weekend, as first reported by Business Insider.

Facebook initially dismissed the reports as irrelevant, claiming the data was leaked years ago, and so the fact that it had all been collected into one uber-database containing one in every 15 people on the planet, and was now being given away for free, didn’t really matter. Facebook has become accustomed to dealing with multiple massive privacy breaches in recent years, and data belonging to hundreds of millions of its users has been leaked or stolen by hackers. But instead of owning up to its latest failure to protect user data, Facebook is pulling from a familiar playbook: just like it did during the Cambridge Analytica scandal in 2018, it’s attempting to reframe the security failure as merely a breach of its terms of service.

So instead of apologizing for failing to keep users’ data secure, Facebook’s product management director Mike Clark began his blog post by making a semantic point about how the data was leaked. “It is important to understand that malicious actors obtained this data not through hacking our systems but by scraping it from our platform prior to September 2019,” Clark wrote.
This is the same excuse given in 2018, when it was revealed that Facebook had given Cambridge Analytica the data of 87 million users without their permission, for use in political ads. Clark goes on to explain that the people who collected this data—sorry, “scraped” this data—did so by using a feature designed to help new users find their friends on the platform. “This feature was designed to help people easily find their friends to connect with on our services using their contact lists,” Clark explains. The contact importer feature allowed new users to upload their contact lists and match those numbers against the numbers stored on people’s profiles. But like most of Facebook’s best features, the company left it wide open to abuse by hackers. “Effectively, the attacker created an address book with every phone number on the planet and then asked Facebook if his ‘friends’ are on Facebook,” security expert Mikko Hypponen explained in a tweet.

Clark’s blog post doesn’t say when the “scraping” took place or how many times the vulnerability was exploited, just that Facebook fixed the issue in August 2019. Clark also failed to mention that Facebook was informed of this vulnerability back in 2017, when Inti De Ceukelaire, an ethical hacker from Belgium, disclosed the problem to the company. Facebook has been collecting users’ phone numbers for a decade, initially claiming that it was part of the platform’s security protocols. But in reality, Facebook was simply using that data to help it sell more ads and target more users — a breach of users’ trust that the Federal Trade Commission (FTC) decided was worth a $5 billion fine in 2019.
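The attack Hypponen describes is simple to sketch. The snippet below is a schematic illustration only: the function names, matching logic, and toy numbers are hypothetical stand-ins for Facebook’s private contact-importer internals, which are not public. The point it demonstrates is that a matching feature with no rate limit doubles as a membership oracle over the entire phone-number space:

```python
def candidate_numbers(prefix: str, digits: int):
    """Generate every possible number under a prefix. Purely to
    illustrate scale: a 10-digit space is 10 billion numbers."""
    for n in range(10 ** digits):
        yield f"{prefix}{n:0{digits}d}"

def simulate_contact_import(uploaded, registered):
    """Schematic version of a contact-importer match: return which
    'uploaded contacts' correspond to registered accounts.
    `registered` stands in for the platform's internal lookup."""
    return [num for num in uploaded if num in registered]

# Toy demonstration with a 4-digit space instead of a real one.
registered = {"+1000000042", "+1000001337"}
uploaded = candidate_numbers("+100000", 4)
matches = simulate_contact_import(uploaded, registered)
print(matches)  # the attacker learns exactly which numbers have accounts
```

Enumerating the full space this way, an attacker needs no breach of Facebook’s systems at all, which is why “scraping, not hacking” is cold comfort to the affected users.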
But for users whose phone numbers were being traded freely online, possibly the most aggravating part of Clark’s post is when he puts the onus on users to protect the data that Facebook itself required users to hand over in the name of “security.” “While we addressed the issue identified in 2019, it’s always good for everyone to make sure that their settings align with what they want to be sharing publicly,” Clark wrote. “In this case, updating the ‘How People Find and Contact You’ control could be helpful. We also recommend people do regular privacy checkups to make sure that their settings are in the right place, including who can see certain information on their profile and enabling two-factor authentication.”

It’s an audacious move for a company worth over $300 billion, with $61 billion cash on hand, to ask its users to secure their own information, especially considering how byzantine and complex the company’s settings menus can be. Thankfully for the half a billion Facebook users who’ve been impacted by the breach, there’s a more practical way to get help. Troy Hunt, a cybersecurity consultant and founder of Have I Been Pwned, has uploaded the entire leaked database to his website, which allows anyone to check whether their phone number is listed in the leaked database. While Facebook is attempting to downplay the seriousness of the leak, the decision about how serious this is does not lie with the company alone. In Ireland, the Data Protection Commissioner (DPC)—which has the power to levy a fine of up to 4% of global turnover, or around $3.5 billion—has slammed the company for failing to inform it of the breach. Turn to OUR FORUM to learn more.

Auto manufacturers and other companies are hoping that the global chip shortage will end soon, but snarled semiconductor supply chains may not untangle until next year. The mess began when the pandemic upended the market for semiconductors. As demand for cars plummeted, automakers slashed their orders. But at the same time, demand for chips that power laptops and data centers skyrocketed. That bifurcation shifted the market, and when car and truck sales rebounded, semiconductor manufacturers rushed to meet demand. Soon, though, shortages of key components emerged.

The industry is known for planning—and for its long lead times—so it could take a while for the chip market to sort itself out. “There seems to be a broad consensus that it will stabilize by the end of the year,” Chris Richard, principal in Deloitte’s supply chain and network operations practice, told Ars. “But if I go back to 2008 and the financial crisis, it was a couple years after the rebound started before everything smoothed out again.”

It’s not just manufacturing capacity that’s hard to come by. Shortages of wafers and packaging substrates are compounding the problem. Those have hit the automotive sector especially hard, Richard added. A drought in Taiwan and a fire at a Japanese fab threaten to add to the industry’s woes. Many of the chips in shortest supply, including those destined for the automotive sector, are made using older processes. These mature nodes are typically well understood, and many fabs run them near the limits of their capacity, meaning there’s not a lot of slack in the system.

In other industries, shortages like this can be solved more easily—customers can simply place orders with other manufacturers to meet temporary spikes in demand. But automakers are unlikely to dial up a new supplier, since it takes about three to six months, sometimes more, to qualify chips from a new factory.
And semiconductor manufacturers are unlikely to build new fabs to meet what might prove to be temporary surges in demand. In the end, the best bet for both sides is to push for more production at existing fabs. Chip manufacturers have responded by ramping up production on their existing lines where they can, but that’s difficult in fabs that are already running above 90 percent capacity. To free up more production, they’re trying to tweak production rates on existing machines, request early deliveries for tools they’ve already ordered, and squeeze more of those tools into space-constrained factories. “It’s just a big scramble,” Richard said.

For many car companies, chip problems have been made worse by the fact that the companies are often several steps removed from semiconductor manufacturers. Over the years, as cars have incorporated more advanced technologies, automakers have outsourced the production of more and more parts to suppliers. That distant relationship stands in sharp contrast with computer and electronics companies, which often work directly with semiconductor companies. Together, they command about 60 to 70 percent of the chip market, while automotive customers account for less than 10 percent.

The current chip crisis and the trend toward electrification are factors likely to change how car companies interact with semiconductor manufacturers. While today’s fossil fuel-powered vehicles use plenty of chips, electric vehicles promise to use more, especially as advanced driver assistance systems, or ADAS, become more widespread in the coming years. The coincidence of the chip shortage and electrification will change how auto executives view their relationship with semiconductor manufacturers, Richard said. Automakers will likely work much more closely with chip companies in the future, even if the resulting car parts are made by several different suppliers. For more, navigate to OUR FORUM.