
Cyberpunk 2077 is an Early Access game. It wasn't labeled that way at launch, but it should have been (and while it may not have sold quite so many copies, it probably would have cut down on the outrage from players at the state it arrived in). Cyberpunk 2077 was far from finished when CDPR pushed it out the door a couple of years too early, and despite a massive new patch, it's still far from finished today.

That patch, version 1.2, was released earlier this week. It weighs in at 33GB and includes nearly 500 fixes for the PC version of the game, covering quests, gameplay systems, and the many, many, many bugs Cyberpunk 2077 shipped with. Despite the surprisingly long list of fixes and tweaks, the experience post-patch is ultimately about the same. After playing a couple of hours with the 1.2 patch installed, I can't say I really noticed much of a difference.

Yes, the patch makes cops and police drones spawn a bit further away when you commit a crime, but that doesn't make their response feel any less ridiculous, especially when you're in a remote area with hardly anyone around and can watch them blip into the world. And despite the swarms of teleporting police, they're still incredibly easy to evade, because they give up the moment you're out of sight and never jump into cars to pursue you.

Post-patch, I still get the bug where I'm suddenly thrown hundreds of meters away from the spot I was standing. I still regularly see NPCs floating in the air. I still see those ridiculous 2D cars that are supposed to simulate traffic at a distance, and I still see them in places where there's no need to simulate traffic at a distance. I still can't get the second part of the vending machine quest to kick off, despite the quest marker pointing me to the spot where it's supposed to start. I don't have any mod conflicts, either—this is a completely clean install of the patched game. It's just still heavily broken.

The first thing I did after installing the patch was run to the spot outside V's apartment where, on day one, I witnessed cars repeatedly and hilariously smashing into a barricade on the sidewalk. They're still doing that. There are fewer cars on the road now, which makes it less noticeable, but every car that does go down that road still smashes immediately into that barrier and sends hunks-o-car flying through the air. It's still funny to me, but it demonstrates just how much more there is to fix. (Though at least now V sleeps on their bed like an actual human being would.)

Some players are having an easier time post-patch, reporting that driving with the keyboard is much improved on PC now that there's a steering sensitivity slider. Some say performance has improved as well, with more consistent fps and quicker load times. Naturally, as happens with patches for just about every PC game ever made, other players are reporting a worse experience: more crashes, lower fps, and new quest bugs in place of old ones. The subreddit is still packed with glitch gifs, as it has been since day one.

I do think Cyberpunk 2077 was worth playing when it launched, and it's still worth playing right now. There are lots of great characters and some really interesting quests. It looks amazing, and it's a beautiful world (if a rarely rewarding one) to explore.
Yes, the glitches and bugs and half-assed systems like police responders can be grating and frustrating, but the goofy physics bugs can be amusing, too, and at times the characters and story are engaging enough that even distracting bugs don't completely ruin them. Learn more by visiting OUR FORUM.

The newest method of infecting your computer is remarkably old-fashioned: it uses a telephone call. Security researchers are documenting a new malware campaign they've dubbed "BazarCall." One of its primary malware "payloads" is the BazarLoader remote-access Trojan, which can give a hacker full control over your PC and be used to install more malware.

The attack starts with an email notifying you that a free trial subscription for a medical service you supposedly signed up for is about to run out, and that your credit card will be charged in a few days — at $90 a month or some other ridiculous rate. The subject line may read "Thank you for using your free trial," "Do you want to extend your free period," or something similar, according to The Record and Bleeping Computer.

Naturally, you're wondering what the hell this email is, but you're pretty sure you don't want to be paying for something you never agreed to. Fortunately, the message provides a phone number you can call to cancel the subscription, plus a subscriber ID number you can refer to during the call. You've heard of, and maybe even seen, phishing emails that want you to click on a link, then take you to a site that asks for your password or tries to install something on your computer. But there's no link in this email. It seems safe. And what harm can come from calling a phone number?

So you call. You're placed on hold. You wait for a couple of minutes. And then a helpful call-center operator — he or she sounds suspiciously like someone who'd be part of a tech-support scam — comes on the line and listens to your questions about the email. The operator asks for the subscriber ID mentioned in the email. Now here's the key thing: that subscriber ID is very important because it lets the crooks know who you are — and many of their targets are people who work at specific companies. "They will be able to identify the company that got that email when you give them a valid customer [ID] number on the phone," Binary Defense security expert Randy Pargman told Bleeping Computer. "But if you give them a wrong number they will just tell you that they canceled your order and it's all good without sending you to the website." Here's a YouTube video illustrating the entire process; the interaction with the call-center operator starts about 2 minutes and 45 seconds in.

Anyway, the customer-service rep puts you back on hold for a bit to check your subscriber ID, then comes back to tell you who signed up and provided a credit card for this subscription — and it's someone who's not you. There must be a mistake. The friendly customer-support person tells you that because this concerns a medical service, you've got to fill out some forms online to cancel the subscription. He sends you to a professional-looking website where you can continue the cancellation process.

There are at least five possible websites, again listed here. The ones we saw all looked much the same, but someone took a lot of effort to make each site look decent. The websites have FAQs, privacy statements, terms of use and even contact information listing street addresses of Los Angeles office towers and southern California phone numbers. We called a couple of the listed phone numbers but got nowhere. We also discovered that all five websites we visited have domains that were registered last week using the same alias and the same Russian email address. Back on the customer-support call, the rep directs you to the site's signup page, where you can click Unsubscribe.
However, the Unsubscribe field doesn't ask for your name or your email address. Instead, it again asks for the subscription ID number found in the original email notification you received. Click Submit on the Unsubscribe dialog box, and your browser prompts you to allow the download of a Microsoft Excel spreadsheet or Word document. The customer-support rep says you must download, open and digitally "sign" this document to cancel the subscription. Now, Microsoft Office files downloaded from the internet are treated as dangerous enough that Windows itself "sandboxes" them so they can't run macros — small embedded programs — without your permission. But the customer-support rep you have on the phone insists that you click the yellow bar that appears across the top of this Excel or Word file to enable macros so that you can "sign" the document. We have a lot more posted on OUR FORUM.
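That macro is what actually installs BazarLoader, so the safest move is simply never to enable macros on a file like this. For the curious, here is a minimal sketch of how you could inspect such a document yourself using the open-source Python oletools package; the filename used here is a placeholder, not one taken from the real campaign.

    # Minimal sketch: check a downloaded Office file for macros before trusting it.
    # Assumes oletools is installed (pip install oletools); the filename is hypothetical.
    from oletools.olevba import VBA_Parser

    parser = VBA_Parser("subscription_form.xls")
    if parser.detect_vba_macros():
        print("This document contains VBA macros -- do not enable them.")
        # analyze_macros() flags suspicious keywords such as auto-exec triggers and shell calls.
        for kw_type, keyword, description in parser.analyze_macros():
            print("  [%s] %s: %s" % (kw_type, keyword, description))
    else:
        print("No VBA macros detected.")
    parser.close()

Anything that prints suspicious keywords here is a document you should delete, not "sign."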

An upgraded variant of the Purple Fox malware with worm capabilities is being deployed in an attack campaign that is rapidly expanding. Purple Fox, first discovered in 2018, used to rely on exploit kits and phishing emails to spread. However, a new campaign taking place over the past several weeks -- and which is ongoing -- has revealed a new propagation method leading to high infection numbers.

In a blog post on Tuesday, Guardicore Labs said that Purple Fox is now being spread through "indiscriminate port scanning and exploitation of exposed SMB services with weak passwords and hashes." Based on Guardicore Global Sensors Network (GGSN) telemetry, Purple Fox activity began to climb in May 2020. While there was a lull between November 2020 and January 2021, the researchers say overall infection numbers have risen by roughly 600% and total attacks currently stand at 90,000.

The malware targets Microsoft Windows machines and repurposes compromised systems to host malicious payloads. Guardicore Labs says a "hodge-podge of vulnerable and exploited servers" is hosting the initial malware payload, many of them running older versions of Windows Server with Internet Information Services (IIS) version 7.5 and Microsoft FTP. Infection chains may begin through internet-facing services containing vulnerabilities such as SMB, browser exploits sent via phishing, brute-force attacks, or deployment via exploit kits such as RIG. As of now, close to 2,000 servers have been hijacked by Purple Fox botnet operators.

Guardicore Labs researchers say that once code execution has been achieved on a target machine, persistence is managed through the creation of a new service that loops commands and pulls Purple Fox payloads from malicious URLs. The malware's MSI installer disguises itself as a Windows Update package with different hashes, a feature the team calls a "cheap and simple" way to avoid the malware's installers being connected to one another during investigations.

In total, three payloads are then extracted and decrypted. One tampers with Windows firewall capabilities, creating filters that block a number of ports -- potentially in a bid to stop the vulnerable server from being reinfected with other malware. An IPv6 interface is also installed for port scanning purposes and to "maximize the efficiency of the spread over (usually unmonitored) IPv6 subnets," the team notes, before a rootkit is loaded and the target machine is restarted. Purple Fox is loaded into a system DLL for execution on boot.

Purple Fox then generates IP ranges and begins scanning on port 445 to spread. "As the machine responds to the SMB probe that's being sent on port 445, it will try to authenticate to SMB by brute-forcing usernames and passwords or by trying to establish a null session," the researchers say. The Trojan/rootkit installer has also adopted steganography to hide local privilege escalation (LPE) binaries in past attacks. To learn more visit OUR FORUM.
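Because the worm spreads by probing port 445, a quick way to gauge your own exposure is to check which machines on a network you administer are answering on that port. Below is a minimal defensive sketch in Python; the 192.168.1.0/24 subnet is an assumption and should be replaced with a range you actually own.

    # Minimal sketch: audit a network you administer for hosts exposing SMB (port 445),
    # the service Purple Fox brute-forces to spread. The subnet below is an assumption.
    import socket
    import ipaddress

    SUBNET = ipaddress.ip_network("192.168.1.0/24")
    TIMEOUT = 0.5  # seconds per host

    for host in SUBNET.hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(TIMEOUT)
            # connect_ex returns 0 when the TCP connection succeeds, i.e. the port is open.
            if sock.connect_ex((str(host), 445)) == 0:
                print(f"{host} has port 445 open -- check it is patched and not internet-facing")

Any host that turns up here should use strong SMB credentials and should never be reachable from the open internet.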

Looking to use your phone in an emergency? Modern smartphones and smartwatches let you set up features that will ping your last known location to emergency contacts in a situation where you're unable to talk on the phone. Both Apple and Google have baked these features into iOS and Android respectively, and more and more wearable manufacturers are including them too.

Why would you want to set up emergency SOS location tracking? There are a variety of scenarios where you may not be able to talk on a phone but can still find a way to send your location to trusted individuals. These features also give you an easy way to call the emergency services directly, so they're worth setting up before you need them. This guide will teach you how to set up the equivalent features on your iPhone, Android phone, or an alternative such as wearables from Garmin and Apple. Not all fitness trackers or wearables sport these features, but most smartphones do.

Emergency SOS is already available when you take an iPhone out of the box, but there are some ways you can set it up to work better. It works in all countries, though in some places you may only be able to choose one particular emergency service. Making an emergency services call is simple from an iPhone, but the way it works differs depending on the model you have. If you own an iPhone 8 or later (that is, a phone released after 2017), you can hold down the side button and the volume buttons. You'll then see a slider on the screen that says "Emergency SOS"; drag it across and the phone will immediately call the emergency services. If you can't slide it across, keep holding down the buttons and the phone will sound an alert with a countdown. When the countdown finishes, the phone calls the emergency services, which is particularly useful if you can't take your phone out of a pocket. We would encourage you to set up emergency contacts (more on that below), as the phone will then message your contacts immediately afterward with your location information and more.

Why would you want an emergency contact? First, it can help emergency services identify who to contact, and on Apple devices these people will automatically receive a message with your location after your call to the emergency services. To set this up, open the Health app and tap your profile picture. There you'll find an option called Medical ID, and at the bottom of that page an option called emergency contacts. Here you can enter the contact's information, your relationship to them, and their phone number. Tap Done afterward, and you've set up your emergency contact. You can have several of these on your iPhone at one time.

On Android phones, these features differ depending on the manufacturer. You can often find the information you need by searching your phone's Settings for phrases such as SOS messages or simply the word emergency. For example, Samsung phones have a feature called Send SOS Messages that lets you press the side key three times to automatically message someone with your location. It will automatically attach pictures taken with your rear and front cameras, as well as an audio recording of the moments before the message was sent. For more detailed instructions on various devices visit OUR FORUM.

Today, researchers exposed common weaknesses lurking in the latest smart sex toys that can be exploited by attackers. As more and more adult toy brands enter the market, and with the COVID-19 situation driving a rapid increase in sex toy sales, researchers believe a discussion around the security of these devices is vital. In the examples provided by the researchers, technologies like Bluetooth and inadequately secured remote APIs make these IoT personal devices vulnerable to attacks that go beyond just compromising user privacy.

ESET security researchers Denise Giusto Bilić and Cecilia Pastorino have shed light on some weaknesses lurking in smart sex toys, including the newer models. The main concern highlighted by the researchers is that newer wearables like smart sex toys are equipped with many features such as online conferencing, messaging, internet access, and Bluetooth connectivity. This increased connectivity also opens the door to these devices being taken over and abused by attackers.

The researchers explain that most of these smart devices feature two channels of connectivity. First, the connectivity between the smartphone user and the device itself is established over Bluetooth Low Energy (BLE), with the user running the smart toy's app. Second, the communication between a remotely located sexual partner and the app controlling the device is established over the internet. To bridge the gap between one's distant lover and the sex toy user, smart sex toys, like any other IoT device, use servers with API endpoints handling the requests. "In some cases, this cloud service also acts as an intermediary between partners using features like chat, videoconferencing and file transfers, or even giving remote control of their devices to a partner," explained Bilić and Pastorino in a report.

But the researchers state that the information processed by sex toys consists of highly sensitive data such as names, sexual orientation, gender, a list of sexual partners, and private photos and videos, which, if leaked, can seriously compromise a user's privacy. This is especially true if sextortion scammers get creative after getting their hands on such private information. More importantly, though, the researchers express concern over these IoT devices being compromised and weaponized by attackers for malicious actions, or to physically harm the user. This can, for example, happen if the sex toy gets overheated. "And finally, what are the consequences of someone being able to take control of a sexual device without consent, while it is being used, and send different commands to the device?" "Is an attack on a sexual device sexual abuse and could it even lead to a sexual assault charge?" Bilić and Pastorino ask.

To demonstrate the seriousness of these weaknesses, the researchers conducted proof-of-concept exploits on the Max by Lovense and We-Vibe Jive smart sex toys. Both of these devices were found to use the least secure "Just Works" method of Bluetooth pairing. Using the BtleJuice framework and two BLE dongles, the researchers were able to demonstrate how a man-in-the-middle (MitM) attacker could take control of the devices and capture the packets. The attacker can then re-broadcast those packets after tampering with them to change settings like vibration mode and intensity, or even inject commands of their own. Likewise, the API endpoints used to connect a remote partner to the user rely on a token that wasn't especially hard to brute-force.
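Because "Just Works" pairing involves no real authentication, the first problem is simply how visible these devices are over the air. Here is a minimal sketch of how easily anyone nearby can enumerate advertising BLE devices, using the open-source Python bleak library; this is a generic scanner for illustration only, not the researchers' BtleJuice MitM setup, and the names printed are whatever the hardware broadcasts.

    # Minimal sketch: list nearby BLE devices advertising themselves.
    # Assumes bleak is installed (pip install bleak) and a BLE-capable adapter.
    import asyncio
    from bleak import BleakScanner

    async def main():
        # Any nearby observer sees the same advertisement data.
        devices = await BleakScanner.discover(timeout=5.0)
        for device in devices:
            print(device.address, device.name)

    asyncio.run(main())

A device that shows up here, pairs without a PIN, and accepts unauthenticated write commands is exactly the kind of target the researchers describe.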
Want more? Visit OUR FORUM.

After spending more than a decade building up massive profits off targeted advertising, Google announced on Wednesday that it's planning to do away with any sort of individual tracking and targeting once the cookie is out of the picture. In a lot of ways, this announcement is just Google's way of doubling down on its long-running pro-privacy proclamations, starting with the company's initial 2020 pledge to eliminate third-party cookies in Chrome by 2022. The privacy-protective among us can agree that killing off these sorts of omnipresent trackers and targeters is a net good, but it's not time to start cheering the privacy bona fides of a company built on our data—as some were inclined to do after Wednesday's announcement.

As the cookie-kill date creeps closer, we've seen a few major names in the data-brokering and adtech business—shady third parties that profit off of cookies—try to come up with a sort of "universal identifier" that could serve as a substitute once Google pulls the plug. In some cases, these new IDs rely on people's email logins, which get hashed and collectively scooped up from tons of sites across the web. In other cases, companies plan to flesh out the scraps of a person's identifiable data with other data that can be pulled from non-browser sources, like their connected television or mobile phone. There are tons of other schemes that these companies are coming up with amid the cookie countdown, and apparently, Google's having none of it.

"We continue to get questions about whether Google will join others in the ad tech industry who plan to replace third-party cookies with alternative user-level identifiers," David Temkin, who heads Google's product management team for "Ads Privacy and Trust," wrote in a blog post published on Wednesday. In response, Temkin noted that Google doesn't believe "these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions." Based on that, these sorts of products "aren't a sustainable long term investment," he added, noting that Google isn't planning to build "alternate identifiers to track individuals" once the cookie does get quashed.

What Google does plan on building is its own slew of "privacy-preserving" tools for ad targeting, like its Federated Learning of Cohorts, or FLoC for short. Just to get everyone up to speed: while cookies (and some of these planned universal IDs) track people by their individual browsing behavior as they bounce from site to site, under FLoC a person's browser would take any data generated by that browsing and basically plop it into a large pot of data from people with similar browsing behavior—a "flock," if you will. Instead of being able to target ads against individuals based on the morsels of data each person generates, Google would let advertisers target these giant pots of aggregated data.

We've written out our full thoughts on FLoC before—the short version is that, like the majority of Google's privacy pushes we've seen until now, the FLoC proposal isn't as user-friendly as you might think. For one thing, others have already pointed out that the proposal doesn't necessarily stop people from being tracked across the web; it just ensures that Google's the only one doing it. This is one of the reasons that the upcoming cookiepocalypse has already drawn scrutiny from competition authorities over in the UK.
Meanwhile, some American trade groups have already loudly voiced their suspicions that what Google's doing here is less about privacy and more about tightening its already obscenely tight grip on the digital ad economy. To learn more turn your attention to OUR FORUM.
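To make the cohort idea concrete, here is a toy sketch of grouping browsers by similar histories with a SimHash-style fingerprint. This is purely illustrative and does not claim to mirror Google's real FLoC implementation; the domain names are made up, and the point is only that a short cohort ID shared by many users is what would be exposed, rather than an individual history.

    # Toy sketch of cohort assignment: a SimHash-style fingerprint over visited domains.
    # Conceptual illustration only -- not Google's actual FLoC algorithm.
    import hashlib

    def simhash64(domains):
        # 64-bit fingerprint: each bit records whether more domain hashes had a 1
        # or a 0 in that position, so similar histories yield similar fingerprints.
        counts = [0] * 64
        for domain in domains:
            h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:8], "big")
            for bit in range(64):
                counts[bit] += 1 if (h >> bit) & 1 else -1
        fingerprint = 0
        for bit in range(64):
            if counts[bit] > 0:
                fingerprint |= 1 << bit
        return fingerprint

    def cohort_id(domains, bits=8):
        # Keep only a few leading bits so many different users share each cohort.
        return simhash64(domains) >> (64 - bits)

    print(cohort_id(["news.example", "recipes.example", "travel.example"]))

In a scheme like this, advertisers would only ever see the small cohort number printed at the end, which is exactly why critics argue the privacy question shifts to whoever computes and assigns those cohorts.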