
Google announces AI policy proposal and a fix for Chromecasts

Published 1 day ago · 4 minute read

In a flurry of activity, Google has addressed a Chromecast issue and released a comprehensive AI policy proposal. The tech giant is rolling out a fix for “untrusted device” errors that have plagued Chromecast 2nd generation and Chromecast Audio users. Simultaneously, Google has weighed in on the national “AI Action Plan,” staking out positions on copyright, export controls, and AI regulation.

Chromecast Fix

Many Chromecast users encountered “untrusted device” errors that effectively disabled casting. The issue primarily affected Chromecast 2nd generation and Chromecast Audio devices. While Google hasn't specified the root cause, speculation arose on Reddit that an expired certificate within the devices might be to blame. Google has stated, “We have started to roll out a fix for the problem with Chromecast (2nd gen) and Chromecast Audio devices, which will be completed over the next few days. Your device must be connected to receive the update.”

However, a factory reset might not solve the problem. Google acknowledges that users who attempted a factory reset “may still be experiencing an issue where you cannot re-setup your device.” The company assures users that it's actively working on a resolution and advises monitoring the support post for further updates. In the interim, a Reddit user has shared unofficial troubleshooting steps.

Google’s AI Policy Proposal

Mirroring activity from other major players such as OpenAI, Google has published its response to the call for a national “AI Action Plan.” The proposal outlines Google’s positions on critical aspects of AI development and regulation.

Copyright and AI Training: Google advocates for weak copyright restrictions on AI training data, asserting that “fair use and text-and-data mining exceptions” are crucial for AI advancement. Google seeks the right to train its models on publicly available data, including copyrighted material, with minimal restrictions. The company argues that this approach avoids “unpredictable, imbalanced, and lengthy negotiations with data holders.” This stance comes as Google faces lawsuits from data owners alleging the company used their copyrighted data without permission or compensation. The legal landscape surrounding fair use and AI training remains uncertain, with U.S. courts yet to rule definitively on the matter.

Export Controls: Google expresses concern that certain export controls, particularly those limiting the availability of advanced AI chips to specific countries, could undermine U.S. economic competitiveness. Google suggests these controls impose “disproportionate burdens on U.S. cloud service providers.” This view contrasts with competitors like Microsoft, which has stated its confidence in complying with the existing regulations. The export rules include exemptions for trusted businesses requiring large clusters of chips.

Investment in R&D: Google calls for sustained investment in domestic research and development, pushing back against efforts to reduce spending and eliminate grant awards. The company also suggests the government should release datasets beneficial for commercial AI training and allocate funding to early-market R&D while ensuring scientists and institutions have access to computing resources and models.

Federal AI Legislation: Highlighting the challenges posed by the current patchwork of state AI laws (781 AI bills are currently pending across the U.S.), Google urges the passage of federal AI legislation, including a comprehensive privacy and security framework. Google cautions against imposing overly burdensome obligations on AI systems, particularly around liability for how they are used. The company argues that model developers often lack control over how their models are used and shouldn't be held responsible for misuse. Google has historically opposed laws that would clearly define the precautions AI developers must take and establish liability for harms caused by their models.

Transparency and Disclosure: Google expresses reservations about overly broad disclosure requirements, similar to those being considered in the EU. Google believes the U.S. government should resist transparency rules that risk divulging trade secrets, allowing competitors to replicate products, or compromising national security. Some jurisdictions, including California and the EU, are moving toward mandating greater transparency from AI developers about how their systems work and what data they were trained on.
