Guest Blogger Hannah H.
Businesses today face a dual challenge: adopting artificial intelligence (AI) while also ensuring sustainable practices. The AI market is booming, having grown by nearly $50 billion from 2023 to 2024 (that’s right, billion).
It now stands at $184 billion and is expected to surpass $826 billion by 2030. As AI rapidly evolves, sustainability remains a global priority. So how do we balance the growth of AI with rising concerns about sustainability?
This article explores how you can incorporate sustainable AI practices within the Environmental, Social, and Governance (ESG) criteria to drive meaningful change. From optimizing algorithms and using efficient data centers to ensuring unbiased and fair practices, AI can play a vital role in every aspect of sustainability efforts.
As consumers increasingly demand sustainable practices, businesses are turning to sustainable AI for innovation. Sustainable AI refers to the development and use of artificial intelligence in ways that minimize environmental impact. We can also look at sustainable AI in terms of social equity and transparent governance. Businesses have a corporate responsibility to uphold ESG criteria, and AI is already impacting each aspect in unique and transformative ways.
First coined by the United Nations Global Compact, ESG is a set of metrics companies use to measure their social and environmental impact. Demonstrating strong ESG performance has become increasingly important for organizations in today’s competitive, climate-conscious world. ESG metrics are also a useful way to measure how AI is affecting sustainability efforts.
Let’s take a closer look at AI’s role in the environmental, social, and governance categories:
The energy consumption of AI is a growing concern for businesses. In 2023, Google’s global greenhouse gas emissions were nearly 50% higher than in 2019, which can largely be tied to its data centers. The International Energy Agency (IEA) projects that global electricity demand from AI will increase by nearly 75% from 2022 to 2026.
The rise of data centers has driven up energy demand. The equipment inside them generates heat and requires constant cooling, which in turn consumes large amounts of water and electricity, raising energy costs.
Energy-efficient AI practices, also known as “Green AI”, are important strategies for businesses to reduce their environmental footprint. This can include optimizing algorithms and using energy-saving hardware.
For example, DeepMind, Google’s AI subsidiary, has used machine learning to predict and reduce energy usage, particularly for cooling its data centers. This approach has cut cooling energy consumption by up to 40%.
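To make the idea concrete, here is a minimal Python sketch of the general pattern behind predictive cooling: train a model on telemetry to predict cooling energy, then compare candidate settings and pick the most efficient one. The telemetry ranges and cost formula below are invented for illustration, and DeepMind’s actual system is far more sophisticated.

```python
# Hypothetical sketch of predictive cooling optimization (NOT DeepMind's
# actual system): learn cooling energy from telemetry, then pick the
# cheapest fan setpoint for current conditions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Invented telemetry: outside temp (C), IT load (kW), fan setpoint (%)
X = rng.uniform([10, 200, 40], [35, 800, 100], size=(1000, 3))
# Synthetic cooling-energy target: rises with heat load; fan speed helps
# up to a point, then its own power draw dominates
y = (0.4 * X[:, 0] + 0.05 * X[:, 1]
     - 0.3 * X[:, 2] + 0.002 * X[:, 2] ** 2
     + rng.normal(0, 1, 1000))

model = GradientBoostingRegressor().fit(X, y)

# Score candidate fan setpoints for current conditions; choose the cheapest
candidates = np.array([[22.0, 550.0, s] for s in range(40, 101, 5)])
best = candidates[model.predict(candidates).argmin()]
print(f"Suggested fan setpoint: {best[2]:.0f}%")
```

In a real deployment, the model would be retrained continuously on live sensor data, and its recommendations would be vetted by safety constraints before being applied.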
Google also adapts its cooling strategies to local conditions to manage energy and water consumption. While water cooling uses about 10% less energy than air cooling, it can be more problematic in regions facing water scarcity. In Arizona, for example, Google opts for air cooling to reduce the impact on local water resources.
Another sustainable AI example comes from retail giant Walmart, which uses AI to help its employees minimize food and fashion waste.
Walmart employees can scan fruit and vegetables to assess their freshness, with AI suggesting actions like lowering prices, returning items to vendors, or donating them. Similarly, AI helps manage seasonal clothing inventory, analyzing demand to make informed decisions and reduce fashion waste.
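Walmart’s exact models aren’t public, but the decision logic is easy to sketch. Suppose a hypothetical vision model scores an item’s freshness from 0.0 (spoiled) to 1.0 (fresh); the store app could then map that score to a suggested action. The thresholds below are invented for illustration.

```python
# Hypothetical rule layer on top of an AI freshness score; the score
# would come from a trained computer-vision model in practice, and all
# thresholds here are invented.
def suggest_action(freshness: float, days_to_sell: int) -> str:
    if freshness >= 0.8:
        return "keep at full price"
    if freshness >= 0.5:
        return "apply markdown" if days_to_sell <= 2 else "monitor"
    if freshness >= 0.3:
        return "donate to food bank"
    return "return to vendor"

print(suggest_action(0.6, days_to_sell=1))  # -> "apply markdown"
```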
Ethical AI is another consideration when thinking about sustainability. Many of us have probably wondered about AI's potential to replace jobs. According to the World Economic Forum, AI could disrupt the job market on a scale similar to the automation revolution of the 1950s.
To balance this impact, companies are combining AI-driven automation with human oversight. For example, Siemens leverages AI, IoT, and data analytics to streamline production while keeping its employees at the front line of operations. AI handles the more repetitive tasks, while humans focus on final inspections, problem-solving, and more creative work.
Companies can also apply ethical AI frameworks to the hiring process. AI can help eliminate biases, support fair hiring, and promote social equity in the workforce. Paired with diverse training data and a human-centered approach, ethical AI in hiring can foster a more inclusive culture that values different perspectives and experiences.
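One concrete way to operationalize this is to run automated fairness checks over screening outcomes. The sketch below applies the “four-fifths rule” (a common adverse-impact heuristic) to invented data; the group labels and data are placeholders.

```python
# Minimal fairness check sketch: flag potential adverse impact when one
# group's selection rate falls below 80% of the highest group's rate
# (the "four-fifths rule"). All data here is invented.
from collections import defaultdict

def selection_rates(candidates):
    """Share of candidates advanced per group, from (group, passed) pairs."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, passed in candidates:
        totals[group] += 1
        advanced[group] += passed
    return {g: advanced[g] / totals[g] for g in totals}

applicants = [("A", 1), ("A", 0), ("A", 1), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(applicants)
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Potential adverse impact; review screening model:", rates)
```

A check like this doesn’t prove a model is fair, but it gives human reviewers a tripwire for investigating the screening step before decisions are finalized.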
Additionally, AI has made waves in the healthcare industry, especially for patients in underserved communities. Take the Butterfly iQ+, a portable ultrasound device powered by AI. The Butterfly Network designed this device to produce high-quality ultrasound images from a user’s smartphone.
The Butterfly iQ+ has been used in remote areas with limited infrastructure, helping to overcome barriers to healthcare access. The company also trains healthcare workers to use the device effectively, enhancing social good through AI and expanding access to medical care where it is needed most.
In the public sector, AI can enhance decision-making processes by promoting transparency and accountability. Explainable AI plays a large role in this context, as it bridges the gap between complex algorithms and human users. It helps build trust by clarifying how AI conclusions are reached and identifying any mistakes to prevent future ones.
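One widely used explainability technique is permutation importance: measure how much a model’s accuracy drops when each input feature is shuffled, which reveals the features the model actually relies on. The sketch below uses scikit-learn on synthetic data; the feature names are placeholders.

```python
# Permutation importance sketch on synthetic data: features whose
# shuffling hurts accuracy most are the ones the model depends on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["f0", "f1", "f2", "f3"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```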
Establishing an AI ethics committee is another step many organizations take to demonstrate corporate responsibility and oversee AI projects. An AI ethics committee can review all AI applications and assess their potential social impacts. This can be balanced with community involvement and engagement to ensure that AI applications reflect diverse perspectives and needs, and to help minimize biases in data sets.
Overall, a company can use AI to monitor and report on its entire ESG performance. After aligning AI practices with ESG principles, advanced algorithms can analyze data to provide insights into the company’s environmental impact, social inclusivity, and governance practices.
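In its simplest form, such monitoring rolls normalized metrics up into pillar and composite scores. The sketch below uses invented metrics and weights purely for illustration; real ESG reporting frameworks define their own indicators and weightings.

```python
# Illustrative ESG roll-up (invented metrics and weights): average
# normalized metrics within each pillar, then take a weighted sum.
import pandas as pd

metrics = pd.DataFrame({
    "pillar": ["E", "E", "S", "S", "G"],
    "metric": ["energy_kwh", "water_m3", "pay_equity",
               "safety_incidents", "board_diversity"],
    "normalized_score": [0.72, 0.65, 0.88, 0.91, 0.60],  # 0 = worst, 1 = best
})
weights = {"E": 0.4, "S": 0.35, "G": 0.25}

pillar_scores = metrics.groupby("pillar")["normalized_score"].mean()
esg_score = sum(weights[p] * pillar_scores[p] for p in weights)
print(pillar_scores.to_dict(), f"overall: {esg_score:.2f}")
```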
AI’s ethical and governance challenges extend into the creative industries as well. The rise of AI music platforms has sparked significant controversy in the music industry, where many artists are concerned about the unauthorized use of their voices and have pushed back against these platforms.
Back in 2020, AI-generated music channels began using Jay-Z’s voice to create new renditions of popular songs and literature, including his voice rapping Billy Joel’s “We Didn’t Start the Fire” and reciting Hamlet’s “To be or not to be” soliloquy. Though remarkably convincing, these unauthorized recreations were quickly removed following legal action from Jay-Z’s team.
A more recent controversy occurred in 2023 with the AI-generated track “Heart on My Sleeve”, which mimicked the voices of Drake and The Weeknd. The song gained millions of views on TikTok and other platforms before being removed due to copyright claims.
In response to these growing AI concerns, over 200 artists, including Billie Eilish, Nicki Minaj, and Katy Perry, signed an open letter warning against the “predatory use of AI” in music. They argued that their voices are unique intellectual property and should be protected. The artists also expressed concerns about AI music generation devaluing human creativity.
As AI-generated music platforms continue to grow, legal teams are trying to keep pace. Existing copyright laws, designed for human-created content, face new challenges in addressing AI training data, voice replication, and creative ownership. Three significant legal developments highlight the evolving response to these challenges.
The 1988 case Midler v. Ford Motor Co. set a precedent for protecting an individual’s distinctive voice: actress Bette Midler successfully sued Ford after another singer imitated her voice in a commercial. While the case helped establish voice rights as protected property, its application to AI-generated content remains complex. Current copyright laws don’t ban feeding existing music into an AI system or using voices that merely sound similar to an existing singer.
In 2024, the Recording Industry Association of America (RIAA) coordinated lawsuits against two AI music platforms, Suno and Udio, alleging that the startups used unauthorized, copyrighted recordings in their training data. The platforms defend their practices under the “fair use” doctrine of US copyright law, but the ongoing cases are bound to set important precedents for how AI companies can legally train their models on existing music.
Tennessee's Ensuring Likeness Voice and Image Security (ELVIS) Act represents one of the first state-level responses to AI voice replication. Signed into law in March 2024, it provides comprehensive protection for artists' voices and likenesses against unauthorized AI use, including deepfakes and voice cloning. The act serves as a potential model for future legislation in other states.
While AI tools offer unprecedented opportunities for musical exploration, they also challenge our traditional understanding of authenticity and artistic ownership.
As we look to the future, the question is not whether we have to choose between human creativity and AI, but instead, how we can find ways for them to complement each other. Ultimately, the key lies in striking a balance between embracing AI’s innovative capabilities and safeguarding artists’ rights to their unique creativity.
Contact us to see why the brightest companies trust Lithios.