Google Responds To US DOJ’s Antitrust Lawsuit (updated December 2023 on Katfastfood.com)
A lawsuit was filed in a federal court in Washington, D.C., on Tuesday. Google is being accused of maintaining a monopoly through several exclusive business contracts and agreements that lock out competition.
— Justice Department (@TheJusticeDept) October 20, 2020
This is the most significant action the federal government has taken against a tech company in the past 20 years.
Google has issued an official response to the lawsuit, calling it “deeply flawed” and claiming it does “nothing to help consumers.”
“Today’s lawsuit by the Department of Justice is deeply flawed. People use Google because they choose to, not because they’re forced to, or because they can’t find alternatives.”
Google goes on to say that, rather than helping consumers, the lawsuit will artificially promote “lower-quality” alternative search engines.
In addition, Google hypothesizes the lawsuit will also raise phone prices and make it harder for people to get the search services they want to use.
Here’s a complete rundown of the DOJ’s accusations and Google’s response to each of them.

Department of Justice Accusations / Google’s Response
Accusation: Google has agreements and contracts with businesses to promote its services.
“Yes, like countless other businesses, we pay to promote our services, just like a cereal brand might pay a supermarket to stock its products at the end of a row or on a shelf at eye level.”
On Android devices, Google has promotional agreements with carriers and device makers to feature its services.
This helps keep the operating system free, Google says, as well as reduce the price people pay for Android phones.
Rival apps are often preloaded onto Android devices as well.
Google notes that its search engine doesn’t come preloaded on Windows devices, where Bing is the default search engine.
Accusation: Google pays Apple billions of dollars to be the default search engine on iPhones.
“Apple features Google Search in its Safari browser because they say Google is ‘the best.’ This arrangement is not exclusive—our competitors Bing and Yahoo! pay to prominently feature, and other rival services also appear.”
“The bigger point is that people don’t use Google because they have to, they use it because they choose to.
This isn’t the dial-up 1990s, when changing services was slow and difficult, and often required you to buy and install software with a CD-ROM.
Today, you can easily download your choice of apps or change your default settings in a matter of seconds—faster than you can walk to another aisle in the grocery store.”
The lawsuit also alleges that Americans aren’t proficient enough with technology to install and use Google alternatives.
Google says that’s not true while pointing out that many of the world’s most popular apps aren’t preloaded – such as Spotify, Instagram, Snapchat, Amazon, and Facebook.

What Happens Now?
The historic lawsuit could stretch on for several years, according to technology policy experts at The New York Times.
For comparison, a similar lawsuit against Microsoft took over a decade to settle.
The investigation process which led to this lawsuit took over a year on its own.
So we’re unlikely to get a satisfying conclusion any time soon, which makes this a particularly interesting story to follow.
It’s worth noting that Attorney General William P. Barr put immense pressure on the Justice Department to file this lawsuit before Election Day.
However, given how long the lawsuit may stretch on, reporters at the New York Times suggest it’s not politically motivated.
Sources: Google, The New York Times
As expected, the United States Department of Justice has filed an antitrust lawsuit against Apple and five major book publishers over alleged price fixing related to digital books. Three publishers are reportedly close to settling with Uncle Sam in order to dodge costly and lengthy litigation and avoid risking potentially high damages.
Bloomberg first broke the news on Twitter, writing that the price-fixing antitrust lawsuit has been filed against Apple and publisher Hachette. The full report says the government sued publishers Hachette SA, HarperCollins, Macmillan, Penguin and Simon & Schuster in New York district court, claiming collusion over e-book pricing.
Apple and Macmillan, which have refused to engage in settlement talks with the Justice Department, deny they colluded to raise prices for digital books, according to people familiar with the matter. They will argue that pricing agreements between Apple and publishers enhanced competition in the e-book industry, which was dominated by Amazon.com Inc. (AMZN).
With the exception of Macmillan, Penguin, and Apple itself, the named publishers are reportedly willing to settle to avoid costly legal fees. The Justice Department said it would announce an “unspecified” antitrust settlement today.
The Wall Street Journal has key excerpts from the lawsuit. The Verge also has a handy analysis for those eager to learn more.
Here’s what the government wants:
The government is seeking a settlement that would let Amazon and other retailers return to a wholesale model, where retailers decide what to charge customers, the people said. A settlement could also void so-called most-favored nation clauses in Apple’s contracts that require book sellers to provide the maker of the iPad with the lowest prices they offer competitors, the people said.
Reuters reported yesterday that the U.S. Department of Justice was gearing up for a big antitrust lawsuit against Apple and five major book publishers. Uncle Sam apparently thinks that Apple and the publishers colluded to fix prices in the e-book industry.
Apple signs up e-book titles for its iBook Store under the so-called “agency” model which allows publishers to set the prices themselves, with Apple taking their usual 30 percent cut.
Amazon, on the other hand, operates under the “wholesale” model, meaning the retailer controls the prices of e-books. The problem is that Amazon has been abusing its dominant market position to limit competition by often selling e-books at a loss.
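The difference between the two revenue models comes down to simple arithmetic. Here is a minimal sketch of both splits (the 30 percent agency commission is from the article; all prices are hypothetical illustrations):

```python
# Illustrative comparison of the "agency" and "wholesale" e-book models.
# The 30% agency commission comes from the article; the prices are made up.

def agency_split(consumer_price, commission=0.30):
    """Agency model: the publisher sets the consumer price and the
    store keeps a fixed commission."""
    store_cut = consumer_price * commission
    return consumer_price - store_cut, store_cut

def wholesale_split(wholesale_price, retail_price):
    """Wholesale model: the publisher is paid a wholesale price, while
    the retailer sets the consumer price and keeps (or absorbs) the
    difference."""
    return wholesale_price, retail_price - wholesale_price

# Agency: publisher sets $12.99, store takes 30%.
pub, store = agency_split(12.99)        # publisher gets ~$9.09, store ~$3.90

# Wholesale: publisher charges $13.00, retailer discounts to $9.99
# and absorbs a $3.01 loss per copy to stay competitive.
pub_w, margin = wholesale_split(13.00, 9.99)
```

With the hypothetical numbers above, the retailer’s wholesale margin is negative – the discounting-at-a-loss dynamic the article describes.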
Rival e-book stores had to follow suit and cut their prices in order to remain competitive. As a result of this forced discounting, underpriced e-books often barely cover fees for writers, editors, marketing and so forth.
And because there’s little money to be made under Amazon’s wholesale model, it’s impossible for anyone but the big boys to make a decent living producing e-books.
Apple’s model lets publishers small and big alike set their prices and therefore control their own destiny. Apple also requires publishers to price their titles on competing stores the same or higher than on the iBook Store. As a result, major publishers doing business with the iBook Store have actually raised their prices across platforms.
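That same-or-higher requirement is a most-favored-nation (MFN) clause, and its ratchet effect can be sketched as a simple price floor (the $9.99 Kindle ceiling is from the article; the agency price is a hypothetical figure):

```python
# Sketch of Apple's most-favored-nation (MFN) clause as a price constraint.
# Under the clause, no rival store may undercut the iBookstore, so the
# iBookstore price effectively becomes a floor for every other store.

def rival_price_floor(ibookstore_price):
    """Minimum price a publisher may set on a competing store."""
    return ibookstore_price

ibookstore_price = 12.99   # hypothetical agency price set by the publisher
old_kindle_price = 9.99    # Amazon's long-standing ceiling (from the article)

# To keep the title available on Apple's store, the publisher must push
# the Kindle price up to at least the iBookstore price.
new_kindle_price = max(old_kindle_price, rival_price_floor(ibookstore_price))
```

In this sketch the Kindle price rises from $9.99 to $12.99 – the cross-platform price increase described in the next paragraph.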
Higher-priced e-books on the iBook Store led Amazon to raise the prices of Kindle books above its $9.99 ceiling, lest it risk losing content also found on Apple’s store.
This happens to be a violation of federal antitrust laws.
What a bunch of you-know-what.
Uncle Sam is suing Apple for letting publishers control their prices and protect their business model from Amazon? The wholesale model may be beneficial to consumers, but only in the short term. Aggressive discounting and running a business at a loss has never been a prudent strategy in my book.
In the long run, Amazon’s rules of the game actually discourage individual writers, educators, and wordsmiths around the world from continuing to put out great content and make a living in the process.
Why would anyone bother authoring an e-book if they are unable to cover the costs?
OpenAI CEO Sam Altman responded to a request by the Federal Trade Commission as part of an investigation to determine if the company “engaged in unfair or deceptive” practices relating to privacy, data security, and risks of consumer harm, particularly related to reputation.
it is very disappointing to see the FTC’s request start with a leak and does not help build trust.
that said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. of course we will work with the FTC.
— Sam Altman (@sama) July 13, 2023
The FTC has requested information from OpenAI dating back to June 2023, as revealed in a leaked document obtained by the Washington Post.
The subject of investigation: did OpenAI violate Section 5 of the FTC Act?
The documentation OpenAI must provide should include details about large language model (LLM) training, refinement, reinforcement through human feedback, response reliability, and policies and practices surrounding consumer privacy, security, and risk mitigation.
we’re transparent about the limitations of our technology, especially when we fall short. and our capped-profits structure means we aren’t incentivized to make unlimited returns.
— Sam Altman (@sama) July 13, 2023

The FTC’s Growing Concern Over Generative AI
The investigation into a major AI company’s practices comes as no surprise. The FTC’s interest in generative AI risks has been growing since ChatGPT skyrocketed into popularity.

Attention To Automated Decision-Making Technology
In April 2020, the FTC published guidance on artificial intelligence (AI) and algorithms, warning companies to ensure their AI systems comply with consumer protection laws.
It noted Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act as laws important to AI developers and users.
The FTC cautioned that algorithms built on biased data or flawed logic could lead to discriminatory outcomes, even if unintended.
The FTC outlined best practices for ethical AI development based on its experience enforcing laws against unfair practices, deception, and discrimination.
Recommendations include testing systems for bias, enabling independent audits, limiting overstated marketing claims, and weighing societal harm versus benefits.
“If your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA,” the guidance warns.

AI In Check
The FTC also reminded AI companies not to make exaggerated or unsubstantiated marketing claims about AI capabilities.
In the post from February 2023, the organization warned marketers against getting swept up in AI hype and making promises their products cannot deliver.
Common issues cited: claiming that AI can do more than current technology allows, making unsupported comparisons to non-AI products, and failing to test for risks and biases.
The FTC stressed that false or deceptive marketing constitutes illegal conduct regardless of the complexity of the technology.
The reminder came a few weeks after OpenAI’s ChatGPT reached 100 million users.

Deepfakes And Deception
About a month later, in March, the FTC warned that generative AI tools like chatbots and deepfakes could facilitate widespread fraud if deployed irresponsibly.
It cautioned developers and companies using synthetic media and generative AI to consider the inherent risks of misuse.
The agency said bad actors can leverage the realistic but fake content from these AI systems for phishing scams, identity theft, extortion, and other harm.
While some uses may be beneficial, the FTC urged firms to weigh making or selling such AI tools given foreseeable criminal exploitation.
It also warned against using synthetic media in misleading marketing and failing to disclose when consumers interact with AI chatbots versus real people.
Federal Agencies Unite To Tackle AI Regulation
Near the end of April, four federal agencies – the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Civil Rights Division (DOJ), the Equal Employment Opportunity Commission (EEOC), and the FTC – released a statement on how they would monitor AI development and enforce laws against discrimination and bias in automated systems.
The agencies asserted authority over AI under existing laws on civil rights, fair lending, equal opportunity, and consumer protection.
Together, they warned AI systems could perpetuate unlawful bias due to flawed data, opaque models, and improper design choices.
The partnership aimed to promote responsible AI innovation that increases consumer access, quality, and efficiency without violating longstanding protections.

AI And Consumer Trust
In May, the FTC warned companies against using new generative AI tools like chatbots to manipulate consumer decisions unfairly.
After describing events from the movie Ex Machina, the FTC claimed that human-like persuasion of AI chatbots could steer people into harmful choices about finances, health, education, housing, and jobs.
Though not necessarily intentional, the FTC said design elements that exploit human trust in machines to trick consumers constitute unfair and deceptive practices under FTC law.
With generative AI adoption surging, the FTC alert puts companies on notice to proactively assess downstream societal impacts.
Those rushing tools to market without proper ethics review or protections would risk FTC action on resulting consumer harm.

An Opinion On The Risks Of AI
FTC Chair Lina Khan argued that generative AI poses risks of entrenching significant tech dominance, turbocharging fraud, and automating discrimination if unchecked.
In a New York Times op-ed published a few days after the consumer trust warning, Khan said the FTC aims to promote competition and protect consumers as AI expands.
Khan warned a few powerful companies controlled key AI inputs like data and computing, which could further their dominance absent antitrust vigilance.
She cautioned realistic fake content from generative AI could facilitate widespread scams. Additionally, biased data risks algorithms that unlawfully lock out people from opportunities.
While novel, Khan asserted, AI systems are not exempt from the FTC’s consumer protection and antitrust authority. With responsible oversight, Khan noted, generative AI could grow equitably and competitively, avoiding the pitfalls of other tech giants.

AI And Data Privacy
In June, the FTC warned companies that consumer privacy protections apply equally to AI systems reliant on personal data.
In complaints against Amazon and Ring, the FTC alleged unfair and deceptive practices using voice and video data to train algorithms.
FTC Chair Khan said AI’s benefits don’t outweigh the privacy costs of invasive data collection.
The agency asserted consumers retain control over their information even if a company possesses it. Strict safeguards and access controls are expected when employees review sensitive biometric data.
For kids’ data, the FTC said it would fully enforce the children’s privacy law, COPPA. The accompanying orders required the deletion of ill-gotten biometric data and any AI models derived from it.
The message for tech firms was clear – while AI’s potential is vast, legal obligations around consumer privacy remain paramount.

Generative AI Competition
Near the end of June, the FTC issued guidance cautioning that the rapid growth of generative AI could raise competition concerns if key inputs come under the control of a few dominant technology firms.
The agency said essential inputs like data, talent, and computing resources are needed to develop cutting-edge generative AI models. The agency warned that if a handful of big tech companies gain too much control over these inputs, they could use that power to distort competition in generative AI markets.
The FTC cautioned that anti-competitive tactics like bundling, tying, exclusive deals, or buying up competitors could allow incumbents to box out emerging rivals and consolidate their lead.
The FTC said it will monitor competition issues surrounding generative AI and take action against unfair practices.
The aim was to enable entrepreneurs to innovate with transformative AI technologies, like chatbots, that could reshape consumer experiences across industries. With the right policies, the FTC believed, emerging generative AI could reach its full economic potential.

Suspicious Marketing Claims
In early July, the FTC noted that as AI tools capable of generating deepfakes, cloned voices, and artificial text have proliferated, so too have tools claiming to detect such AI-generated content.
However, experts warned that the marketing claims made by some detection tools may overstate their capabilities.
The FTC cautioned companies against exaggerating their detection tools’ accuracy and reliability. Given the limitations of current technology, businesses should ensure marketing reflects realistic assessments of what these tools can and cannot do.
Furthermore, the FTC noted that users should be wary of claims that a tool can catch all AI fakes without errors. Imperfect detection could lead to unfairly accusing innocent people, like job applicants, of creating fake content.

What Will The FTC Discover?
The FTC’s investigation into OpenAI comes amid growing regulatory scrutiny of generative AI systems.
As these powerful technologies enable new capabilities like chatbots and deepfakes, they raise novel risks around bias, privacy, security, competition, and deception.
OpenAI must answer questions about whether it took adequate precautions in developing and releasing models like GPT-3 and DALL-E that have shaped the trajectory of the AI field.
The FTC appears focused on ensuring OpenAI’s practices align with consumer protection laws, especially regarding marketing claims, data practices, and mitigating societal harms.
For now, the FTC’s investigation underscores that the hype surrounding AI should not outpace responsible oversight.
Robust AI systems hold great promise but pose risks if deployed without sufficient safeguards.
Major AI companies must ensure new technologies comply with longstanding laws protecting consumers and markets.
Investors were clearly a little worried about yesterday’s announcement of a ‘broad’ antitrust investigation into Apple and other tech giants. AAPL stock dropped 1%, wiping $6.8B from its market cap, with similar falls for Alphabet, Amazon, and Facebook.
Altogether, the lost value totaled $33B, but experts say that there’s little to worry about — and indeed, the announcement could even be considered good news for the companies …
The Justice Department didn’t name names, stating that at this stage it is a broad look at tech giants “to understand whether there are antitrust problems that need addressing.” However, given that the terms of the antitrust investigation are to look at the behaviors of “dominant tech firms,” it is clear that Apple is among those companies in the spotlight.
The Justice Department is opening a broad antitrust review into whether dominant technology firms are unlawfully stifling competition […]
The review is geared toward examining the practices of online platforms that dominate internet search, social media and retail services, the department said.
Many are seeing this as a threat to Apple, including Apple Card partner bank Goldman Sachs, which this month warned investors to avoid tech stocks which become subject to antitrust lawsuits.
However, experts cited by Business Insider disagree. An academic and antitrust lawyer were both of the view that this was a political announcement, the government keen to be seen to be doing something, even if it’s unlikely to lead to much.
“There is enormous political pressure on the agencies in Washington to be seen as doing something about big tech,” said Daniel Crane, a professor at the University of Michigan’s law school who focuses on antitrust issues. He continued: “This is their way of responding to the political pressure” […]
The announcement was an unusually public performance by a federal regulator that typically prizes confidentiality in such matters. That’s because it was essentially a notice, directed particularly at key figures in Congress, that the Justice Department will now be spearheading antitrust investigations into the big tech companies, said David Balto, an antitrust lawyer in Washington, D.C., with decades of experience working for and with competition regulators there.
For tech companies, the DOJ’s announcement was, if anything, a subtle indication that the government may not come down as hard on them as it might seem, he said.
Indeed, Balto went further.
This is good news for the companies.
That’s because it’s the Justice Department, not the FTC, laying claim to the issue.
Having the Justice Department take point on antitrust review is actually a good thing for the tech companies, Balto said. The Department of Justice hasn’t filed a major suit under the Sherman Antitrust Act since the Microsoft case two decades ago. And the agency actually has fewer legal options when it comes to policing competition than does the FTC, he said.
“I don’t think anybody’s going to lose any more sleep that this is all with the Justice Department,” Balto said. “If anything, they’ll feel more comfortable in their legal position.”
For his part, Professor Crane says it doesn’t much matter which agency takes the lead — nothing much is likely to come of any antitrust investigation.
The courts have made it difficult for regulators to win antitrust cases, and even when such cases are successful, they tend to take many years to play out. Because of that, there’s little chance the big tech companies will be broken up anytime soon, despite the political pressure on them, he said.
“The kind of blockbuster, ‘let’s break them up’ case that is being trumpeted politically, I just don’t see that being in the offing,” Crane said.
Apple recently testified to Congress on the issue. The DOJ investigation isn’t the only antitrust battle facing Apple: iOS developers have filed a class-action suit over App Store practices; the Supreme Court gave the go-ahead for another one by customers; and the European Union is investigating an antitrust complaint made by Spotify.
According to a lawsuit filed in California, OpenAI used personal information, including medical records and data on children, and even accessed private conversations to train its AI models.
Not just ChatGPT: other tools, such as DALL-E, Codex, and Whisper, were also trained using data that was extracted in violation of the privacy and security of real people.
ChatGPT responds to questions like a human being, writes essays like real people by emulating their experiences, and even generates content as if it were penned by a historic figure. All of this comes from the data it has access to, and now its creator, OpenAI, has been accused in the lawsuit of stealing the personal information of real people.
What does the lawsuit say?
The petitioners have remained anonymous, with only their initials mentioned in the 157-page lawsuit, but they have accused ChatGPT of posing a catastrophic risk. They allege that personally identifiable information was stolen from millions of people to train the AI into being more human-like.
In essence, OpenAI is accused of harvesting and using any piece of personal information that users provide on other platforms, without seeking consent or even approaching any individual. This means that ChatGPT and DALL-E are generating profits based on the private lives of people who aren’t even aware of it.
The plaintiffs also mentioned that without the massive data pile, extracted unethically, OpenAI wouldn’t have been able to create generative AI that is bringing in billions in revenue. Physical location, chats, contact information, search history and even information from browsers had been taken without the knowledge of the users.
What do the plaintiffs demand?
According to the lawsuit, matters are made worse by the fact that OpenAI introduced its products to the market without deploying the necessary safeguards to protect private data.
It calls for OpenAI to be transparent about its data collection methods, to compensate people for the stolen information, and to offer an option to opt out of its data harvesting.
What is OpenAI’s track record on data privacy?
Before this, reports had emerged that OpenAI also used data from YouTube, run by its rival Google, to train ChatGPT and other generative AI tools. The reports claimed that ChatGPT’s makers had quietly drawn on YouTube because it is the single largest source of images, text transcripts, and audio.
The allegations had come months after Google itself was accused of using data from ChatGPT to train its own AI bot called Bard.
ChatGPT had also been banned in Italy over data privacy concerns, as the government sought to prevent it from using the personal details of millions of citizens. But the ban was lifted months later, after Italian regulators were satisfied with the safeguards that OpenAI had put in place.
But that wasn’t the end for OpenAI’s troubles, since Japan also issued a warning to the firm over data privacy concerns related to ChatGPT.
As for the lawsuit’s claims, OpenAI states only that it will collect its users’ email addresses, payment information, and names whenever necessary. The firm has never said anything about the data sourced from other corners of the internet to train its models in the first place.
Infor has been sued by a customer who claims an ERP (enterprise resource planning) project that was supposed to take six months instead allegedly dragged on for well over a year without any useful results.
Buckley Powder, a Colorado company that offers explosives and other products for mining and construction firms, entered an agreement with Infor in December 2011, according to the suit. Under its terms, Infor was supposed to install its SX.e software at Buckley, which had been using another Infor system called TakeStock. Both applications handle processes related to wholesale distribution.
The work was to take no longer than 180 days, but 18 months have now passed with no working system in place, Buckley said in its suit, which was filed last week in U.S. District Court for the District of Colorado.
Instead, “the project has been plagued by setbacks and non-performance on behalf of Infor and no reasonable solution has been presented,” the suit states.
After Buckley notified Infor that it planned to terminate the parties’ agreement, Infor “made a final attempt to provide yet another failed attempt at performance of its obligations by [submitting] a wholly unreasonable and unacceptable implementation schedule,” and Buckley rejected the proposal, the suit adds.
Buckley has paid Infor more than $185,000 and is seeking the return of that money as well as interest, attorney’s fees and other costs.
Infor is one of the industry’s largest ERP vendors after SAP and Oracle, having grown through a long series of acquisitions. In recent years, it has tried to entice customers running older systems to migrate to newer ones through its Flex upgrade program, which offers rapid implementation services and calls for minimal or no additional software license fees. It wasn’t immediately clear on Tuesday whether Buckley’s project was done through the Flex program, however.
It’s also unclear exactly what led to the project’s alleged problems. The ERP industry in general is plagued by project delays and cost overruns, which have resulted in many other lawsuits similar to Buckley’s.
Systems integrators have come under fire in these cases for allegedly providing inexperienced consultants to work on projects, resulting in disarray and missed deadlines. But customers have also borne their share of blame, such as for failing to provide proper requirements and not making the right people available to work on the implementation.
As for Buckley’s suit, “I don’t think we have enough details from either [side],” said analyst Ray Wang, CEO of Constellation Research.
There are a few possibilities, such as a lack of available staffing on either Infor or Buckley’s part, he speculated. In addition, there may have been delays in procuring the necessary hardware for the upgrade, although “18 months is a long time,” Wang said.
Buckley’s lawsuit “is vague and does not specify what Infor did wrong aside from not implementing the system on time,” added analyst Michael Krigsman of consulting firm Asuret, via email. While Buckley’s claim could have merit, “the absence of detail suggests either ignorance of the details or an attempt to evade responsibility by placing all blame on the vendor,” Krigsman added.