Apple may soon be forced to disclose more about its AI strategy. A motion filed by the National Legal and Policy Center (NLPC) demands that the company produce a comprehensive report on the ethical risks of its AI development. Behind the motion are growing concerns about how Apple sources the data used to train its AI models and whether ethical standards are being upheld. Shareholders are set to vote on the motion at the annual general meeting in February 2025. If it passes, Apple would have to document annually which risks exist and what measures are being taken to minimize them.
Apple has invested heavily in artificial intelligence in recent years and, with "Apple Intelligence," is betting on tight integration of AI across its products. While the company presents itself as a pioneer in data protection, it remains unclear exactly how it trains its AI models and which data sources it uses. Unlike other tech companies, Apple relies largely on on-device processing to protect user information. But not every process runs locally: certain AI features access cloud servers, which raises questions about data security. In addition, Apple has in the past entered into partnerships with companies such as Alphabet (Google) that are known for controversial data protection practices. Critics fear that, in practice, Apple does not consistently live up to its strict data protection standards.
Shareholder proposal calls for comprehensive AI report
The proposal, titled "Report on Ethical AI Data Acquisition and Usage," aims to make Apple's handling of external data sources transparent. Its central demand is a report that addresses the following aspects:
- The risks arising from the use of external data sources for training Apple's AI models
- The potential impact on the company, its finances and public safety
- The measures Apple has already taken to minimize these risks
- The methods for verifying the effectiveness of these measures
What sets this proposal apart is that Apple would have to submit an updated report every year. This is intended to prevent the company from simply waiting out the debate about ethical AI development.
The question of data acquisition
A central point of the proposal is how Apple obtains training data for its AI. The concern is that Apple - like other large tech companies - could resort to methods such as data scraping, i.e. the mass collection of content from the Internet without the consent of its authors. This is not uncommon in the industry: companies such as OpenAI and Google have repeatedly been criticized for using protected content in their AI models, and Apple has faced similar allegations. To reduce these risks, the company has tried to pay for access to certain data, for example by buying access to archives. Nevertheless, it remains unclear to what extent Apple's AI models rely on copyrighted content.
Apple's partnerships raise questions
Another argument made by the NLPC is that Apple is indirectly involved in questionable data collection through its close cooperation with Alphabet. Alphabet, Google's parent company, gains access to large amounts of user data because Google is the default search engine on Apple devices. Critics argue that Apple is deliberately allowing the indirect monetization of its user base instead of collecting the data itself. In addition, Apple at one point considered a closer partnership with OpenAI: the company was reportedly offered a seat on OpenAI's board but turned it down, apparently due to antitrust concerns. Since OpenAI has been criticized in the past for non-transparent data collection, any cooperation between Apple and the company raises further questions.
How Apple is likely to respond
Apple is expected to come out against the proposal; in the past, the company has rejected most shareholder proposals calling for more detailed reporting. Since many shareholders follow Apple's recommendations, the proposal is unlikely to win a majority. Nevertheless, the discussion about Apple's AI ethics could gain momentum. Public pressure for more transparency about how artificial intelligence is developed and used is growing, and Apple could be forced to revise its guidelines further to dispel doubts about its AI strategy.
Data protection vs. AI progress: Can Apple combine both goals?
Even if the proposal is likely to fail, the debate about Apple's AI development is far from over. The company faces the challenge of reconciling its data protection promises with the growing demands of AI development. Apple has already implemented measures such as local processing of user data and encrypted access to cloud servers, but the question remains how transparent the company will actually be about its AI strategy. The coming years will show whether Apple consistently enforces its ethical principles or whether economic interests prevail.