American AI Companies Open Up To Counter China

Authored by Catherine Yang via The Epoch Times (emphasis ours),

OpenAI on Aug. 5 released two open-weight language models, the company’s first such release since GPT-2 in 2019.

[Photo: Screens display the logo of DeepSeek, a Chinese artificial intelligence company that develops open-source large language models, and the logo of OpenAI’s chatbot ChatGPT, in Toulouse, France, on Jan. 29, 2025. Lionel Bonaventure/AFP via Getty Images]

Open-weight models make their training parameters, or weights, publicly available but tend not to provide access to the source code or datasets. Open-source models typically include access to the source code, weights, and methodologies.

With weights publicly accessible, developers can analyze and fine-tune a model for specific tasks without requiring original training data.

The weights for the new gpt-oss-120b and gpt-oss-20b models are free to download for developers to fine-tune and deploy in their own environments, OpenAI said.
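If the weights follow the common Hugging Face distribution pattern, loading and running one of the models locally might look like the following — a minimal sketch, assuming a hypothetical model ID of "openai/gpt-oss-20b" and that the transformers and accelerate libraries are installed:

```python
# A minimal sketch of local inference with an open-weight model.
# Assumptions (not from the article): the weights are hosted on Hugging Face
# under the hypothetical ID "openai/gpt-oss-20b", and `transformers` plus
# `accelerate` are installed with enough GPU/CPU memory available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # hypothetical model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the difference between open-weight and open-source models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights sit on the developer’s own hardware, a narrow fine-tune (for example, with an adapter library such as peft) needs no access to OpenAI’s original training data.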

“These open models also lower barriers for emerging markets, resource-constrained sectors, and smaller organizations that may lack the budget or flexibility to adopt proprietary models,” OpenAI said in an Aug. 5 statement. “Broad access to these capable open-weights models created in the US helps expand democratic AI rails.”

Amazon announced on Aug. 6 that OpenAI’s open-weight models are now available on its Bedrock generative AI marketplace in Amazon Web Services. It marks the first time an OpenAI model has been offered on Bedrock, Amazon said in a statement.

In May, Meta announced a collaboration with Red Hat to advance open-source AI for enterprise.

American AI companies and the Trump administration have been in broad agreement about the need for the United States to dominate the AI space, which requires the wide adoption of the American AI stack, including the hardware, models, software, applications, and standards.

On July 23, the White House released its AI action plan, which involves removing barriers for companies to accelerate innovation and build out infrastructure, along with using diplomacy to set AI standards internationally.

Chinese AI companies currently dominate the open-source space. Republican senators recently signed a letter asking the Commerce Department to examine data security risks and potential backdoors in Chinese open-source models such as DeepSeek.

Huawei founder Ren Zhengfei told Chinese state-run media in June that Chinese AI development will include “thousands upon thousands of open-source software.” The state-run Global Times on Aug. 7 published an editorial opining that US efforts to curb China’s AI strategy would fail, as “China has embraced an open-source approach” to meet its vast needs.

Beginning in the 2000s, the Chinese communist regime built open-source software alliances and directed its tech sector to enter the open-source community with the goal of reducing dependency on American proprietary software.

Over time, China has moved from consumer to contributor. According to a 2024 GitHub report, China was the third-largest contributor of open-source software on the platform. Though still far from leading in AI globally, it maintains a strong presence in open-source AI software.

Among the concerns about Chinese-run AI models are data harvesting, including for espionage purposes, and a lack of safeguards, which could allow for malware dissemination or the generation of harmful content.

[Photo: OpenAI CEO Sam Altman speaks during the Microsoft Build conference at the Seattle Convention Center Summit Building in Seattle on May 21, 2024. Jason Redmond/AFP via Getty Images]

OpenAI notes that once an open-weight model is released, “adversaries may be able to fine-tune the model for malicious purposes.”

To gauge this risk, OpenAI fine-tuned the two new models “on specialized biology and cybersecurity data, creating a domain-specific non-refusing version for each domain the way an attacker might” and then tested whether the models would continue to operate within safety guardrails.

“These processes mark a meaningful advancement for open model safety,” OpenAI stated. The company is also inviting third parties to find and report novel safety issues in its models for a chance to win a $500,000 prize.
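The guardrail testing described above can be approximated in spirit by comparing how often a model refuses a fixed set of disallowed prompts before and after adversarial fine-tuning. The sketch below is a rough illustration, not OpenAI’s actual method; the prompt list, the generation callable, and the refusal heuristic are all hypothetical stand-ins:

```python
# Rough sketch of a refusal-rate check before/after adversarial fine-tuning.
# The refusal markers and the generate callable are hypothetical stand-ins;
# this is not OpenAI's evaluation methodology.
REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i won't provide")

def looks_like_refusal(response: str) -> bool:
    """Crude heuristic: does the response contain a refusal phrase?"""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(generate, disallowed_prompts: list[str]) -> float:
    """Fraction of disallowed prompts the model refuses to answer."""
    refusals = sum(looks_like_refusal(generate(p)) for p in disallowed_prompts)
    return refusals / len(disallowed_prompts)

# Usage, with any callable mapping a prompt string to a response string:
#   baseline = refusal_rate(base_model_generate, disallowed_prompts)
#   tuned = refusal_rate(tuned_model_generate, disallowed_prompts)
# A large drop after fine-tuning would suggest the guardrails did not hold.
```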

Vetting Software for Security

Chris Gogoel, vice president and public sector general manager at mobile app security firm Quokka, says the proliferation of AI apps, especially AI assistant apps, has increased security risks for users exponentially.

It used to be that users would rely on different apps for different functions, segmenting the data collected and permissions given, but AI apps tend to be “do-everything” apps, Gogoel told The Epoch Times.

That elevated data collection translates into more inherent risk, he said. The data can also be more sensitive, because users may feed the apps long passages or instructions that reveal in-depth thoughts, intentions, and rationale, rather than the app merely having access to raw files.

With more data collected, the apps become bigger targets for breaches that extract data over a network or from a device. The bigger risk comes from apps whose sources have not been proven secure. OpenAI has adopted an approach that values security, but plenty of other unvetted AI apps have been downloaded millions of times, Gogoel said.

“‘What are these applications doing with our data?’ is a very serious question,” Gogoel added.

“The verification of what happens with that data, and where it goes, how it’s protected, becomes even more important, because if that data is misused, on accident or on purpose, you have a serious, serious problem,” he said, pointing to misused data being exploited for deepfakes and phishing attacks.

Gogoel notes that the declarations a developer makes about what data their app collects may not match what the app actually does.

Sometimes the developer may not even know this is the case: developers often rush to jump on trends and launch apps in time to rise in the rankings, leading to mistakes such as failing to use proper encryption, he said. They may also underinvest in security, perhaps shipping open-source components that contain flaws. App stores do not currently require verification of a developer’s declarations, and Gogoel advocates moving to a verify-first approach.
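A verify-first check could begin by diffing what an app declares against what it is observed doing. A minimal illustration, assuming an already-decoded AndroidManifest.xml (for example, extracted with a tool such as apktool) and a hypothetical set of permissions observed during dynamic analysis:

```python
# Minimal illustration of "verify-first" vetting: compare the permissions an
# Android app declares in its manifest with a set observed at runtime.
# The manifest path and the observed set are hypothetical inputs; a real APK
# manifest must first be decoded to plain XML (e.g., with apktool).
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def declared_permissions(manifest_path: str) -> set[str]:
    """Collect <uses-permission> entries from a decoded AndroidManifest.xml."""
    root = ET.parse(manifest_path).getroot()
    return {
        elem.get(f"{ANDROID_NS}name", "")
        for elem in root.iter("uses-permission")
    }

# Hypothetical result of runtime monitoring on an instrumented device.
observed = {"android.permission.READ_CONTACTS", "android.permission.CAMERA"}

declared = declared_permissions("AndroidManifest.xml")
undeclared_use = observed - declared  # behavior the developer never disclosed
print("Permissions used but never declared:", sorted(undeclared_use))
```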

One bad app can spoil the bunch, he said.

Quokka, which began working with the Pentagon around its founding in 2011, provides mobile vetting services to the federal government and other clients, which led the firm to examine TikTok and ByteDance in 2018.

It found that TikTok not only requested extensive permissions but would also connect with other apps on a user’s device to obtain permissions the user had not explicitly granted. Data collected by trusted applications for legitimate purposes may thus still present security risks if those applications come into contact with unvetted apps.

“It’s not something that we should be looking back after something has exploded and the fire is already raging, so to speak, and there’s tens of millions of users. We’re trying to enable, in our work, the ability to verify at every step,” Gogoel said. “Verify as soon as something hits the store, as soon as something hits your device, as soon as this brand new service comes out … that it does what it says on the tin and nothing else.”

Tyler Durden
Sat, 08/09/2025 – 22:10
ZeroHedge News
