
Ask not what AI can do for you, ask what you can do for AI

Published 9 hours ago · 7 minute read

Society is going through a technological transformation, powered by artificial intelligence (AI). People are asking what AI can do for the processes and systems that we use in every aspect of our lives and what its impacts might be, both positive and negative.

It is important also to reverse this question, to ask what we need to do to maximize value from AI while minimizing the risk. This requires us to cut through the hype, to develop practical, pragmatic approaches to AI that deliver what we need, and no more.

As Forrester's AI Predictions For 2025 observes: “The journey of AI leaders in 2025 will be dominated by the critical realization that there are no shortcuts to AI success and it will be imperative to prepare for the grind.”

Working on AI solutions for a global infrastructure provider means I have a particular focus on delivering the infrastructure AI demands, but this is only one dimension of the environment we need for AI to flourish safely and for society’s benefit.

Three key elements need to come together to deliver this vision for AI: infrastructure, ecosystem and governance.

Every part of the infrastructure that delivers AI – from data centres to wearable technology – influences AI’s cost and value, in financial, societal and environmental terms.

Cloud-edge-connectivity continuum

The trend towards placing more compute power at the edge – in devices like smartphones, sensors and industrial equipment – is accelerating. This shift enables AI models to process data locally, reducing latency, enhancing privacy and enabling real-time decision-making.

However, edge computing does not eliminate the need for robust, high-speed, high-capacity networks. In fact, it amplifies it. Edge devices often need to synchronize with centralized cloud systems to update models, offload complex computations or share insights for broader analysis. For example, a fleet of agricultural drones may process immediate image data on-device but still require cloud connectivity to aggregate trends, retrain models and refine predictions.

This interplay between local processing and centralized intelligence echoes principles found in cognitive architectures and behavioural science. As described in Thinking, Fast and Slow by Daniel Kahneman, intelligent behaviour involves both fast, intuitive responses (System 1) and slower, reflective analysis (System 2).

Similarly, drones monitoring agricultural fields might quickly process image data on-device for immediate decisions – akin to System 1 – while relying on cloud connectivity to aggregate data, retrain models and refine strategies over time — akin to System 2. Designing AI systems with this dual-mode approach allows for more adaptive, efficient and context-aware behaviour, much like how human cognition balances instinct with deliberation.
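The dual-mode pattern described above can be sketched in a few lines. This is a hypothetical illustration, not a real drone system: the classifier, labels and confidence threshold are all illustrative assumptions. A lightweight on-device check acts immediately when it is confident (System 1), and defers ambiguous readings to centralized analysis (System 2).

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for acting on-device

def on_device_classifier(pixel_mean):
    """Placeholder edge model: maps a mean pixel value to (label, confidence)."""
    if pixel_mean > 0.7:
        return ("stressed_crop", 0.95)
    return ("healthy_crop", 0.6)

cloud_review_queue = []  # readings deferred to slower, centralized analysis

def handle_reading(pixel_mean):
    label, confidence = on_device_classifier(pixel_mean)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                       # System 1: immediate edge decision
    cloud_review_queue.append(pixel_mean)  # System 2: defer to the cloud
    return "deferred"
```

The deferred queue is where the edge-to-cloud connectivity discussed above comes in: those readings would be uploaded for aggregation and model retraining.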

AI demands seamless communication between cloud and edge, meaning we need ultra-reliable, high-throughput connectivity that can support the movement of vast volumes of data in real time. This applies especially in AI use cases involving heavy data types, such as video, audio and sensor fusion. To meet this demand, ongoing innovation across all layers of network technology is critical.

Moreover, as AI becomes increasingly integrated into mission-critical systems – from emergency services to industrial automation – the resilience and security of the underlying networks become paramount. This means designing networks not only for speed but also for redundancy, failover and protection against cyber threats.
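The failover principle above can be shown with a minimal sketch. The endpoint names and the `transmit` callable are illustrative assumptions; a production design would add timeouts, backoff and continuous health checks.

```python
def send_with_failover(payload, endpoints, transmit):
    """Try each network path in priority order until one delivers.

    `transmit(endpoint, payload)` is an assumed callable returning True on
    success. Redundancy means a mission-critical message still gets through
    when the preferred link is down.
    """
    for endpoint in endpoints:
        if transmit(endpoint, payload):
            return endpoint  # report which link carried the traffic
    raise ConnectionError("all network paths failed")

# Illustrative use: the primary link is down, so traffic fails over.
healthy_links = {"backup-lte"}
used = send_with_failover("telemetry", ["primary-fibre", "backup-lte"],
                          lambda link, data: link in healthy_links)
print(used)  # backup-lte
```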

We must invest in a digital infrastructure that treats connectivity as a first-class citizen. AI may be the brain, but without a high-performance network as the nervous system, that brain cannot function.

Power and the power of innovation

AI’s power needs are rapidly becoming one of its most pressing challenges — both economically and environmentally. Training large-scale AI models, such as GPT-4, can consume millions of kilowatt-hours of electricity. As models grow in size and complexity, their carbon footprint increases dramatically, raising questions about sustainability.

Inference, or the everyday use of AI models once deployed, also adds up quickly at scale. These demands require innovations not only in model efficiency and hardware design, but also in how we source power. AI models must become more efficient by design: smaller, more specialized models that achieve performance comparable to large, general-purpose models carry much lower computational overhead.
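A back-of-envelope calculation makes the scale argument concrete. All figures here are illustrative assumptions, not measured values for any real model; the point is only that per-query costs multiply across large request volumes, and that a more efficient specialized model shrinks the bill proportionally.

```python
# Illustrative assumptions, not measured data for any real system.
energy_per_query_wh = 0.3      # assumed energy per inference request (Wh)
queries_per_day = 100_000_000  # assumed daily request volume at scale

daily_kwh = energy_per_query_wh * queries_per_day / 1000
print(f"Assumed daily inference energy: {daily_kwh:,.0f} kWh")

# The same workload on a hypothetical specialized model that needs
# one tenth the energy per query:
specialized_kwh = daily_kwh * 0.1
print(f"With a 10x-more-efficient model: {specialized_kwh:,.0f} kWh")
```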

Application ecosystem

Designing the architecture that delivers AI to its users should focus on making AI exploitation as inclusive as possible, bridging the digital divide between technologists and the wider AI-consuming population.

Today, the developer community plays a pivotal role in shaping the future of AI. From creating ethical AI models to contributing to open-source AI tools, developers act as the builders and stewards of AI’s foundational layers. Their choices – in algorithms, training data, frameworks and deployment strategies – influence both technical performance and societal impact.

Fortunately, access to AI technology is no longer limited to experts. The rise of low-code and no-code platforms has democratized AI development, empowering non-technical individuals and small businesses to harness AI capabilities for real-world challenges. This accessibility is a key step towards inclusive innovation, enabling communities historically left behind in the digital revolution to participate in the AI era.

Bridging the digital divide in AI isn't just about tools; it's about people. Talent, research and cross-disciplinary collaboration are essential for responsible development. AI's potential lies in its ability to connect industrial fields and capture new value from cross-domain innovation.

At the core of all AI development lies data. Access to data – and decisions about how it is collected, shared and used – remains firmly in human hands. Responsible data stewardship, guided by clear governance, consent and inclusivity, determines the fairness and applicability of AI systems. Without equitable access to high-quality, representative data, even the most advanced AI risks reinforcing bias or failing to serve its intended users.

Device ecosystem

Another important contributor to AI's development and societal impact is the device ecosystem. The device ecosystem itself relies heavily on advances in AI, as outlined above; the two develop in tandem.

For AI to deliver real-time, context-aware intelligence in the physical world, it must be embedded in everyday devices: smartphones, sensors, wearables, industrial equipment, vehicles and more. These devices serve as the eyes, ears and fingertips of AI, capturing the data it needs to learn and act, and delivering insights exactly where they’re needed — at the edge.

Without a robust and innovative device ecosystem, AI risks remaining siloed in centralized environments, disconnected from interactions with the physical world; with no ability to perceive, respond and adapt in real time. For instance, a predictive maintenance model is only as good as the sensors feeding it, and a generative AI assistant is only useful if it can run responsively on a user's phone or laptop.

To support this, the device ecosystem must continuously evolve. Devices need to be smaller, smarter, more energy-efficient and capable of running AI workloads locally. Innovations in low-power chips, edge AI accelerators, battery technology and adaptive connectivity protocols are critical to this evolution. Equally important are advancements in security, updatability and interoperability to ensure that AI at the edge is trustworthy and scalable.

Security and governance

Security and governance of AI call for a combined top-down and bottom-up approach. Top-down design should centre on creating flexible guardrails for AI exploitation.

Bottom-up design needs to focus on observability and control, using capabilities like explainable AI (XAI) and linking with flexible top-down guardrails that can be adjusted quickly, based on what is observed, to maintain control. While technology may evolve rapidly, the values that guide its use – fairness, accountability, transparency and safety – must be intentionally defined and continuously refined by people.
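As a taste of what bottom-up observability can mean in practice, the sketch below implements permutation importance, one common model-agnostic explainability technique: shuffle one feature's values and measure how much accuracy drops. The drop signals how heavily the model leans on that feature. All names and the toy data are illustrative.

```python
import random

def permutation_importance(model, rows, labels, feature_idx, metric):
    """Accuracy drop when one feature's column is shuffled.

    `model(row)` is an assumed prediction callable. A larger drop means
    the model depends more on that feature.
    """
    base = metric([model(r) for r in rows], labels)
    shuffled = [list(r) for r in rows]
    column = [r[feature_idx] for r in shuffled]
    random.shuffle(column)
    for r, v in zip(shuffled, column):
        r[feature_idx] = v
    permuted = metric([model(r) for r in shuffled], labels)
    return base - permuted

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Toy model that ignores feature 1 entirely, so shuffling it changes nothing:
rows = [(0, 1), (1, 0), (0, 0), (1, 1)]
labels = [0, 1, 0, 1]
model = lambda r: r[0]
print(permutation_importance(model, rows, labels, 1, accuracy))  # 0.0
```

Signals like this are what would feed the flexible top-down guardrails: when observed importances drift in deployed systems, operators can tighten or adjust the rules.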

Bridging the digital divide will enable communities to develop domain-specific security and governance. This will help to promote a responsible approach to AI, where adherence to requirements and regulations is built into algorithms from the start.

Governance must be contextual and adaptive. Responsible AI frameworks are not something we can outsource to the machines we are building. They are, instead, a reflection of human values, guided by our evolving understanding of how AI interacts with society.

The responsibility lies with us – the human readers of this article – to ask the right questions, define the right goals and design the right systems.

Origin: World Economic Forum
