Smartphone Location Data Brokers Clash With Privacy Advocates Over Coronavirus
The collection and sale of smartphone location data has long been a source of controversy, especially given that most people don’t realise they are being tracked.
Background
There have been a number of revelations about just how much location data is collected from smartphones, and how assurances about privacy may be far from true in practice.
[One phone] leaves a house in upstate New York at 7 a.m. and travels to a middle school 14 miles away, staying until late afternoon each school day. Only one person makes that trip: Lisa Magrin, a 46-year-old math teacher. Her smartphone goes with her […]
The app tracked her as she went to a Weight Watchers meeting and to her dermatologist’s office for a minor procedure. It followed her hiking with her dog and staying at her ex-boyfriend’s home, information she found disturbing.
The report found that such data may be passed to as many as 40 different companies, and retained for years.
A follow-up report last year found that just one database contained location data for some 12 million Americans, and the NYT was able to track the movements of identifiable people in sensitive occupations.
We followed military officials with security clearances as they drove home at night. We tracked law enforcement officers as they took their kids to school […]
We spotted a senior official at the Department of Defense walking through the Women’s March [and] to a high school, homes of friends, a visit to Joint Base Andrews, workdays spent in the Pentagon and a ceremony at Joint Base Myer-Henderson Hall with President Barack Obama.
Smartphone location data for coronavirus tracking
Location data brokers say that the data they collect can help in the fight to limit the spread of the coronavirus, reports CNET.
Antonio Tomarchio, president of Cuebiq, says it isn’t providing data directly to any government agency, but he noted the COVID-19 Mobility Data Network has been in contact with policymakers dealing with the pandemic […]
“What we’re interested in, is trends in certain areas,” Tomarchio said. “If you have a big crowd in a park, this could be an indication that social distancing is not being respected.”
Critics, however, raise three objections. First, that putting the data to good use makes the underlying collection seem acceptable.
“This is an essentially corrupt ecosystem of companies spying on people without any meaningful understanding or meaningful consent,” said Jay Stanley, a senior policy analyst for the American Civil Liberties Union. “There is a danger that we allow these companies to validate these activities and to whitewash their reputation by repurposing their data for COVID-19.”
Second, the data may not be representative of the US population as a whole.
Certain apps and devices are more widely used by affluent and younger communities, Stacey Gray, senior counsel for the Future of Privacy Forum, told lawmakers. That could leave out some of the most vulnerable segments of the population.
“This includes underrepresentation of the elderly, very young or lowest-income people who do not own cellphones, or anyone who does not own a cellphone for other reasons, such as refusal on religious grounds,” Gray said in her testimony to Congress on Thursday.
Third, location data can be faked.
At this point if the news that your phone company has been selling customer location data to bounty hunters surprises you, you may need to catch up on the last few years of privacy revelations. If you’re just generally suspicious of places that collect your data, congratulations on being right (again). AT&T, Sprint, T-Mobile, and Verizon are currently being sued for selling customer location data to third parties known as data brokers, who then sell the data to other people with an interest in finding you – especially the “kinda-sorta” officials like bail bondsmen and bounty hunters.
The short story
The Maryland-based Z Law firm filed a class action suit against the four big US mobile providers in May 2019. They’re suing on behalf of the companies’ customers who were affected. Essentially, the lawsuit accuses these companies of providing access to real-time location data to companies that shouldn’t have had it. The suit covers a roughly four-year period, though that doesn’t necessarily mean the activity was limited to those years.
Since it’s a class action lawsuit, affected individuals may be entitled to compensation, though more details on this will be forthcoming. The real goal here, however, is to get the big phone companies to stop selling sensitive customer information – or at least to be more careful with it.What exactly has been going on?
Back in 2018 there was another scandal where it came out that Securus, a prison technology company, was giving low-level law enforcement officers access to the location of pretty much every phone on all of the major carriers. That level of surveillance usually requires a warrant in the US, but Securus was using an intermediary company called LocationSmart, which pretty much anyone could sign up for, even on a free trial account, to get access to the location of most cell phones being used in the U.S.
Generally, the data in question here isn’t your GPS data – it’s your approximate location as determined by the strength of different cell tower signals, which is something phone companies really need in order to provide service. However, some of the data available to bounty hunters was occasionally from GPS, meaning they could get your location down to a few meters.
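The signal-strength positioning described above can be illustrated with a toy sketch. This is not how carriers actually compute location – real systems use tower geometry, timing advance, and propagation models – but a simple weighted centroid, where stronger signals pull the estimate toward their tower, conveys the idea:

```python
def approximate_position(towers):
    """Estimate a handset's position as a weighted centroid of nearby
    cell towers, weighting each tower by its measured signal strength.

    towers: list of (x, y, signal_strength) tuples, strengths > 0.
    Returns an (x, y) estimate.
    """
    total = sum(strength for _, _, strength in towers)
    x = sum(tx * strength for tx, _, strength in towers) / total
    y = sum(ty * strength for _, ty, strength in towers) / total
    return (x, y)

# A phone heard by three towers; the strongest signal pulls the
# estimate toward the nearest tower.
print(approximate_position([(0, 0, 1.0), (2, 0, 1.0), (1, 2, 2.0)]))  # → (1.0, 1.0)
```

The coarse grain of this method is why tower-based fixes are usually accurate only to hundreds of meters, while the GPS data mentioned above can narrow it to a few.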
A lot of other stuff happened around the 2018 location issue (including Securus being hacked, meaning access to their real-time tracking tools could have been in anyone’s hands for a while), but the reason it’s important to this story is that every carrier involved promised to fix these sorts of loopholes and stop giving sensitive data to sketchy third parties. That apparently hasn’t been going so well, since Motherboard was actually able to identify the general path the data took.
Here’s how the process seems to have been working:
A data aggregator (Zumigo, in this case) buys customer data from a telecom company. They then use this data for any number of things, including fraud prevention and possibly marketing.
Zumigo then sells off your data to other services, including, in this case, a company called Microbilt, which uses the access it buys from Zumigo to sell services like background or credit checks, or tracking people who might skip bail. Microbilt actually maintains price lists for services like these.
Whoever is using the service, like bounty hunters or landlords, pays for your cell phone data and gets to use it.
If all that seems a little Byzantine, it is, but though your data is bouncing through a lot of different companies, it’s all coming straight from the phone provider at the center. If they close off access to third parties who are misusing this data, there won’t be a problem anymore – but it seems like they aren’t.
Bounty hunters aren’t out to get me, why should I worry?
Okay, you’re not Han Solo, and your location data probably isn’t being pulled by anyone in particular, even though you did shoot first. There have been cases, though, of people with access to these tools using them for more off-the-clock activities, such as tracking girlfriends. That’s not something that’s likely to affect the general public, but the fact remains that we now have tools that allow certain people to find you pretty much anywhere, whether it’s a potential employer checking how often you visit a psychiatrist or a marketing company trying to build a better profile on you.
It’s not just tracking individual movements, either: location data that is gathered and analyzed in bulk can help identify trends in how people move. When anonymously gathered and properly used, this type of data can be very helpful in designing better systems, but when it’s firehosed out without much consideration as to whose hands it ends up in, it’s a breach of trust and just generally a bad idea.
Many apps used in schools compromise student data. Here’s one way schools and districts can develop a comprehensive plan to keep that information safe.
Many school districts have seen an explosion in the number of apps and websites that teachers use with students in classrooms. Although digital tools can enhance learning, the expansion in technology has resulted in an increased number of cyber attacks and privacy breaches. Districts have the power and responsibility to promote student safety by ensuring the protection of student data privacy.
According to the Student Privacy Primer from the Student Privacy Compass, “Student data privacy refers to the responsible, ethical, and equitable collection, use, sharing, and protection of student data.” This data includes personally identifiable information such as a student’s name, date of birth, Social Security number, and email address.
Although there are certainly edtech companies that perform due diligence when it comes to protecting data, others do not use best practices. A recent report from the nonprofit Internet Safety Labs found that 96 percent of apps used regularly in K–12 schools have data-sharing practices that “are not adequately safe for children.”
Many of those apps shared children’s personal information with third-party marketers, often without the knowledge or consent of schools. There have also been some recent instances of edtech company data breaches that have shown those companies are not taking the safety precautions they claim to be taking, such as encrypting student information. It is imperative that districts take steps to protect student information.
6 steps to build a culture of student data privacy
1. Identify a point person. As districts begin to think about student privacy, the first step is to identify someone who can become the primary contact on student data privacy questions and decisions. This might be someone at the district office level (such as a director of technology or tech coach), or it might be someone at the school level (such as an assistant principal or instructional coach). This person can also provide teachers with guidance and best practices.
2. Develop a communication strategy. It is essential to create a plan that effectively communicates the district’s data privacy policies and procedures to all stakeholders (for instance, educators, parents, and students). Clearly communicating the plan at each step of the process will help build the relationships necessary to create an environment in which student data privacy is prioritized. If you need help getting started, check out the Student Privacy Communications Toolkit from the Student Privacy Compass.
3. Identify websites and apps being used in the district. Start with the apps that your district is paying for or encouraging teachers to use. Reach out to curriculum specialists, coaches, and anyone else that regularly provides professional development to teachers. I recommend starting with a small batch (10–20) of the most commonly used apps as you first start to develop procedures. Later, as you fine-tune your approval process, you might decide to utilize outside services to identify additional apps that are being used in the classroom.
For example, our district uses GoGuardian, which operates as an extension on student Chromebooks and monitors their browsing activity. The GoGuardian Director Overview dashboard shows us which apps, extensions, and websites are being used the most by our students. Another tool you can use is the LearnPlatform Inventory Dashboard. This is a browser extension pushed out to district devices that populates a dashboard showing all the edtech tools that teachers and students in your district are using.
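Dashboards like the ones above typically let you export usage data. As a rough sketch of what an inventory step looks like, the snippet below tallies the most-used apps from a hypothetical CSV export (the column names `student` and `app` are illustrative, not any particular vendor’s format):

```python
import csv
import io
from collections import Counter

def top_apps(report_csv, n=10):
    """Tally the most-used apps from a usage export.

    report_csv: CSV text with (hypothetical) columns "student" and "app".
    Returns the n most common apps as (app, count) pairs.
    """
    rows = csv.DictReader(io.StringIO(report_csv))
    return Counter(row["app"] for row in rows).most_common(n)

# A tiny made-up export: three students, two apps.
export = """student,app
s1,MathDrills
s2,MathDrills
s1,QuizTime
s3,MathDrills
"""
print(top_apps(export, n=2))  # → [('MathDrills', 3), ('QuizTime', 1)]
```

Starting the vetting queue from a ranked list like this keeps the initial batch focused on the 10–20 apps that touch the most students.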
4. Develop an understanding of pertinent laws and regulations. To effectively address student data privacy, the technology point person will need to be familiar with related legal requirements. One important federal law is the Family Educational Rights and Privacy Act (FERPA), which requires schools to protect the privacy of student education records.
Another federal law that applies here is the Children’s Online Privacy Protection Act (COPPA). COPPA requires operators of commercial websites and online services to obtain consent from parents before collecting personal information from children under the age of 13. While this rule applies to companies, not schools, it is still important to understand because schools can give consent on a parent’s behalf.
Depending on your state, you might also need to do some research about state laws that govern data privacy; plenty of resources exist to help you get started with FERPA, COPPA, and additional state laws.
5. Vet apps for compliance with laws and data privacy. Each app should go through a standardized vetting procedure. I would strongly recommend putting a team together to perform this vetting so that you get diverse perspectives and input from a variety of stakeholders.
Reviewing the TOS and Privacy Policies can feel overwhelming, especially when you are first getting started. Fortunately, the U.S. Department of Education released guidance to help with this evaluation process.
Another helpful (and free) resource is the Common Sense Privacy Program. Common Sense evaluates the privacy policies of individual apps and scores them in 10 different areas, including Data Collection, Data Sharing, and Data Security.
6. Create a list of approved apps to share with teachers. An important part of creating a culture of student data privacy is getting teachers on board, as they are the people making daily decisions about which apps to use with their students. One way you can help them make safe choices is to create a list of approved apps that have been vetted by a person (or group of people) trained to read through Privacy Policies and Terms of Service notices. With so many apps out there to choose from, teachers often have a choice between two that do similar things. A list can help them choose the app that does a better job of protecting data while still allowing them to use technology to enhance learning for students.
For teachers who would like to learn more about student data privacy, provide some resources, such as free training courses.
Creating a culture of student data privacy is challenging, but it is worth the effort to protect our students. Remember that you don’t have to do everything all at once. Take that first step and be a privacy leader!
In the digital age, QR codes are ubiquitous. Everywhere you look, from billboards to business cards, there seems to be an endless stream of these black-and-white patterns promising to deliver additional information with a single smartphone scan. But how do these QR codes impact our data privacy? What happens when we use them without considering the implications?
As more of us embrace this technology, it’s important to understand how QR codes can affect our personal data and whether they offer a secure solution for exchanging sensitive information. In this blog post, we’ll explore how QR codes can affect your data privacy so that you can make informed decisions about their use.
What Is a QR Code and How Does It Work?
A Quick Response (QR) code is a two-dimensional barcode containing up to 4,296 alphanumeric characters. When scanned by a smartphone or tablet camera, the information stored in the code is decoded, and the user can access its content. This technology has been around since 1994, but it has only recently become a popular way of quickly sharing information. According to one industry study, QR code scans have increased by 26 percent over the last two years, showing that more people are embracing the technology.
QR codes work by encoding information within a pattern. When the code is scanned, the user can access its contents without having to manually type in a URL or search for it online. This makes it an incredibly convenient and efficient way of sharing data with others but also raises some important privacy concerns.
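The 4,296-character figure above comes from the QR standard’s “alphanumeric” mode (ISO/IEC 18004), which restricts payloads to a 45-character set and packs each pair of characters into 11 bits, with a trailing odd character taking 6 bits. A small sketch of that sizing rule:

```python
# The 45-character set of QR alphanumeric mode (ISO/IEC 18004).
ALPHANUMERIC = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

def alphanumeric_bits(payload):
    """Return the data-bit count for a payload in QR alphanumeric mode:
    11 bits per character pair, 6 bits for a trailing odd character.
    Raises ValueError if the payload uses characters outside the set."""
    if any(ch not in ALPHANUMERIC for ch in payload):
        raise ValueError("payload not representable in alphanumeric mode")
    pairs, odd = divmod(len(payload), 2)
    return pairs * 11 + (6 if odd else 0)

# The classic spec example: "HELLO WORLD" is 11 characters ->
# 5 pairs * 11 bits + one odd character's 6 bits = 61 data bits.
print(alphanumeric_bits("HELLO WORLD"))  # → 61
```

Note that lowercase letters fall outside this set, which is why URLs in QR codes are often uppercased or encoded in the larger byte mode instead.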
What Data Can Be Stored in a QR Code?
QR codes can store a variety of data, from text to website links, videos, and images. Any information encoded digitally can be stored in a QR code, allowing for the quick and easy exchange of sensitive data between users.
However, this convenience comes with risks. Unlike other data transmission methods, such as email or SMS, it can be difficult to tell where the data is being sent and who can access it. This lack of transparency makes it hard to verify that your information is secure and protected from unauthorized access.
It’s also important to note that most QR codes are static – the encoded content is fixed once the code is generated and doesn’t update when the underlying information changes – so if you’re using QR codes to share sensitive information like passwords, it’s important to double-check that the code contains the latest version.
How Do QR Codes Impact Our Data Privacy?
QR codes can impact data privacy by exposing our personal information to security threats. When using these codes to transfer information, it’s important to consider the potential risks and understand how to protect your data. By knowing the potential risks, you can safely scan the QR code without any doubts.
1. Potential for Data Collection and Tracking
When you scan a QR code, the service behind it can log details such as the time of the scan, your approximate location, and your device type. Collected over time, these scan records can be used to track your behavior and build a marketing profile without your knowledge.
2. Potential for Data Breaches
QR codes can potentially be vulnerable to data breaches as there is no way for users to know who has access to the information stored in them. Additionally, if a QR code isn’t updated regularly with new or updated information, it could give hackers access to old or out-of-date data that could be used maliciously.
3. Potential for Malware Attacks
Hackers can also use QR codes to spread malicious code or malware. For example, they can create a QR code that links to a website that contains malware, and when the code is scanned, the malware is downloaded onto the user’s device. This type of attack can be difficult to detect as malicious QR codes often look just like regular ones.
4. Potential for Phishing Attacks
QR codes can be used to deceive users into sharing sensitive information with malicious actors. In a phishing attack, hackers create fake QR codes that link to sites that look like legitimate ones but are actually run by attackers.
When users scan these codes, they may unknowingly disclose personal information, such as passwords or banking details, to attackers. This attack is particularly hazardous because users are usually unaware that they are revealing sensitive information to malicious individuals.
How to Protect Your Data from QR Code Risks
As mentioned above, QR codes can be risky when it comes to data privacy. To ensure your data remains secure, you should take the following steps.
1. Only Scan Verified QR Codes
Whenever you scan a QR code, make sure it is from a trusted source. Do not scan any codes that are suspicious or appear to have been altered in any way. Additionally, check the website or app before scanning to make sure it is legitimate and secure.
2. Enable Two-Factor Authentication
It’s essential to prioritize the safety of your data, and implementing two-factor authentication is an efficient method to achieve this. Once activated, you’ll have to provide an additional code or password before accessing any data related to a QR code. This extra security measure can help prevent harmful activities like phishing attacks.
3. Use HTTPS Encryption
Whenever possible, use HTTPS encryption when scanning QR codes. This type of encryption helps protect data being sent through the code, making it difficult for third parties to intercept and access your information.
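Steps 1 and 3 above amount to a simple check on the decoded payload before you open it: require HTTPS and an explicitly trusted host. A minimal sketch (the hostnames are illustrative placeholders, not a real allow-list):

```python
from urllib.parse import urlparse

# Illustrative placeholders; substitute the hosts you actually trust.
TRUSTED_HOSTS = {"example.com", "www.example.com"}

def is_safe_qr_url(url, trusted_hosts=TRUSTED_HOSTS):
    """Accept a decoded QR payload only if it is an HTTPS URL pointing
    at an explicitly trusted host."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in trusted_hosts

print(is_safe_qr_url("https://example.com/menu"))   # → True
print(is_safe_qr_url("http://example.com/menu"))    # → False (no TLS)
print(is_safe_qr_url("https://examp1e.com/menu"))   # → False (look-alike host)
```

The look-alike case in the last line is exactly the phishing trick described earlier: a host that reads like a legitimate one at a glance but isn’t.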
4. Use Genuine QR Code Scanner Apps
Use a genuine QR code scanner app from a trusted developer – or your phone’s built-in camera – and keep your device updated with the latest security patches to lower the risk of malware attacks and other security threats. It’s also recommended to have antivirus software installed to safeguard against malicious code hidden in QR codes.
Are QR codes safe?
QR codes can be safe if you take precautions to reduce the risks associated with them. This includes verifying that the code links to a reputable website, enabling two-factor authentication, and using HTTPS encryption when scanning the code. Additionally, make sure to never scan suspicious or altered codes and use genuine QR code scanner apps from trusted companies.
How can QR codes be misused?
QR codes have the potential to be misused in a number of ways. This includes malicious actors creating fake QR codes that link to sites that look like legitimate ones but are actually run by attackers. Additionally, hackers may use QR codes to install malware or spyware on users’ devices.
Overall, QR codes provide a convenient way to quickly transfer data, but they also carry potential risks. To protect your privacy, it’s essential to know the potential risks associated with using QR codes and take steps to minimize them.
This includes being vigilant when scanning QR codes and verifying that the code is linking to a legitimate website. Doing so can help ensure that your data remains safe and secure.
According to a lawsuit filed in California, OpenAI used personal information, including medical records and data on children, and even accessed private conversations to train its AI models.
Not just ChatGPT, other tools such as Dall-E, Codex and Whisper were trained using data that was extracted in violation of privacy and security of real people.
ChatGPT responds to questions like a human being, writes essays like real people by emulating their experiences and even generates content as if it were penned by a historic figure. All of this comes from data that it has access to, and now its creator OpenAI has been accused of stealing personal information of real people, as per the lawsuit.
What does the lawsuit say?
The petitioners have remained anonymous – only their initials appear in the 157-page lawsuit – but they accuse ChatGPT of posing a catastrophic risk. They allege that personally identifiable information was stolen from millions of people to train the AI into being more human-like.
Basically, OpenAI is accused of harvesting and using any piece of personal information that users provide on other platforms, without seeking consent or even approaching the individuals concerned. This means that ChatGPT and Dall-E are essentially generating profits from the private lives of people who aren’t even aware of it.
The plaintiffs also mentioned that without the massive data pile, extracted unethically, OpenAI wouldn’t have been able to create generative AI that is bringing in billions in revenue. Physical location, chats, contact information, search history and even information from browsers had been taken without the knowledge of the users.
What do the plaintiffs demand?
According to the lawsuit, things get worse since OpenAI introduced its products to the market without even deploying the necessary safeguards to protect private data.
It calls for OpenAI to be transparent about its data collection methods, to compensate people for the stolen information, and to offer an option to opt out of its data harvesting.
What is OpenAI’s track record on data privacy?
Before this, reports had emerged that OpenAI also used data from YouTube, run by its rival Google, to train ChatGPT and other generative AI tools. The reports claimed that OpenAI had secretly scraped YouTube because it is the single largest source of images, text transcripts and audio.
The allegations had come months after Google itself was accused of using data from ChatGPT to train its own AI bot called Bard.
ChatGPT had also been banned in Italy over data privacy concerns, as the government sought to prevent it from using the personal details of millions of citizens. But the ban was lifted months later, after Italian regulators were satisfied with the safeguards that OpenAI had put in place.
But that wasn’t the end for OpenAI’s troubles, since Japan also issued a warning to the firm over data privacy concerns related to ChatGPT.
As for the lawsuit, OpenAI only states that it will collect email, payment information and name of its users whenever necessary. But the firm has never mentioned anything about the data sourced from other corners of the internet to train its model in the first place.