AI for Sustainability & Behaviour Change: Opportunities and Risks for Brands

Hidden Layers, Unknown Outcomes.

This article frames some of the content I shared during the hour-long session at Brand Led Culture Change, as many participants came up to me afterward and said I covered a lot of ground too quickly. These are not my fellow panelists' (Murat Sönmez of pulsESG and Melissa Anderson of Public Good) perspectives, so bear in mind I do not have their expertise to counterbalance my inputs as I did in real time on stage. The conversation was moderated by Kristen Norman, and I am grateful for her mindful questions, without which this dialogue would have been a runaway train driven by three enthusiastic entrepreneurs. This article summarises everything I brought up but did not have time to delve into, both on stage and in conversations after our session ended.

1. Will an enterprise that fails to integrate AI/ML meaningfully get left behind? How can AI be harnessed by a brand to meet its sustainability goals?

In a world where most companies build their bottom lines on data, not integrating artificial intelligence and machine learning into a brand's footprint will chisel away its competitive edge and market share over time. AI/ML has so many extraordinary uses that avoiding it altogether would be a massive missed opportunity for any enterprise. Yet most enterprises fail to discern where to apply AI solutions to see positive impacts on their bottom line.

Here's a quick list of things AI can do for your enterprise in general:

  • Sift through vast troves of customer data to create personalised product recommendations, customisations and experiences.

  • Analyse big data expediently and accurately to model trends, map opportunities, and make informed decisions.

  • Depersonalise submitted information and anonymise collected data, while keeping it relevant and accessible for specific use cases.

  • Automate various marketing tasks such as email outreach, social media management, & ad targeting.

  • Tailor recommendations and communications to suit consumer preferences.

  • Offer valuable insights into consumer behaviour & buying habits and build personal profiles based on preferences.

  • Use predictive analytics to reach people with personal relevance and familiarity at scale.

  • Cohesively sequence and simulate relationships between scattered inputs through pattern recognition, making otherwise inaccessible content readily available for relevant use cases.

  • Identify and target specific demographics, enabling brands to design more effective campaigns that reach consumers directly.

  • Reduce the need for middlemen, making tasks less error prone and more resource efficient and cost effective.

  • Qualify and quantify consumer responses, suggestions, and comments across channels, to help brands improve their product and service offerings. This bolsters brand perception and equity.

  • Decrease menial administrative labour, reduce time spent pushing paper, and redirect talent to harness their passion and expertise inimitably.

AI-powered chatbots can provide customers with round-the-clock assistance, instant support, prompt answers to frequently raised queries, and walk them through quick-pay options. They can be trained to bring in human agents only when the AI's scope of responses has been exhausted and the consumer has escalated the situation for resolution. This makes customer service feel consistent, reliable, and clear, and makes it more efficient at resolving disputes. Such use cases of AI also streamline claims-processing times and dispute-resolution turnarounds for insurance firms.
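To make that escalation logic concrete, here is a minimal, illustrative sketch: a bot that answers from a small FAQ and hands off to a human agent once its confidence drops or the customer asks for one. The FAQ entries, keyword-overlap scoring, and thresholds are assumptions standing in for a real intent classifier.

```python
# Minimal sketch of a support flow that answers FAQs and escalates to a human
# agent once the bot's scope is exhausted. The FAQ matching here is a simple
# keyword-overlap score standing in for a real intent classifier.

FAQ = {
    "reset password": "Use the 'Forgot password' link on the sign-in page.",
    "refund status": "Refunds are processed within 5-7 business days.",
    "quick pay": "You can pay instantly from the 'Billing' tab via Quick Pay.",
}

CONFIDENCE_THRESHOLD = 0.5  # below this, the bot admits it cannot help


def best_answer(message: str) -> tuple[str | None, float]:
    """Return the best-matching FAQ answer and a crude confidence score."""
    words = set(message.lower().split())
    scored = []
    for question, answer in FAQ.items():
        q_words = set(question.split())
        overlap = len(words & q_words) / len(q_words)
        scored.append((overlap, answer))
    score, answer = max(scored)
    return (answer, score) if score >= CONFIDENCE_THRESHOLD else (None, score)


def handle(message: str, failed_attempts: int) -> str:
    """Answer if possible; hand off to a human after repeated failures or on request."""
    if "agent" in message.lower() or failed_attempts >= 2:
        return "ESCALATE: routing you to a human agent."
    answer, _ = best_answer(message)
    return answer if answer else "I'm not sure I can help with that. Could you rephrase?"


if __name__ == "__main__":
    print(handle("How do I check my refund status?", failed_attempts=0))
    print(handle("Let me speak to an agent", failed_attempts=0))
```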

Supply-chain disruptions during the pandemic made evident how traditional optimisation approaches fail to solve for rising uncertainty and volatility in supply or demand. AI scheduling agents can coordinate and streamline complex assembly lines, maximising production throughput while minimising changeover costs, to ensure on-time delivery of products to customers. The manufacturing sector could rely on AI scheduling agents to reduce yield losses by 20-40%, as they are built to prioritise time, resource, and cost efficiency.
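The core idea behind changeover-aware scheduling can be sketched in a few lines: order jobs so that expensive switchovers are avoided. The job names and cost matrix below are invented for illustration, and the greedy heuristic is only a stand-in for the far richer optimisation a real scheduling agent would perform.

```python
# Toy sketch of changeover-aware production sequencing: greedily pick the next
# job with the cheapest changeover from the current one. The cost matrix is
# illustrative; a real scheduler would also weigh due dates and throughput.

from itertools import permutations

JOBS = ["paint_red", "paint_blue", "assemble_A", "assemble_B"]
# CHANGEOVER[i][j] = cost of switching from JOBS[i] to JOBS[j]
CHANGEOVER = [
    [0, 2, 5, 6],
    [2, 0, 5, 6],
    [5, 5, 0, 1],
    [6, 6, 1, 0],
]


def greedy_sequence(start: int = 0) -> tuple[list[str], int]:
    """Nearest-neighbour ordering: cheap, but not guaranteed optimal."""
    order, total = [start], 0
    remaining = set(range(len(JOBS))) - {start}
    while remaining:
        nxt = min(remaining, key=lambda j: CHANGEOVER[order[-1]][j])
        total += CHANGEOVER[order[-1]][nxt]
        order.append(nxt)
        remaining.remove(nxt)
    return [JOBS[i] for i in order], total


def optimal_sequence(start: int = 0) -> tuple[list[str], int]:
    """Brute-force baseline, feasible only for small job counts."""
    best_cost, best_order = float("inf"), None
    for perm in permutations(set(range(len(JOBS))) - {start}):
        seq = [start, *perm]
        cost = sum(CHANGEOVER[a][b] for a, b in zip(seq, seq[1:]))
        if cost < best_cost:
            best_cost, best_order = cost, seq
    return [JOBS[i] for i in best_order], best_cost


if __name__ == "__main__":
    print("greedy :", greedy_sequence())
    print("optimal:", optimal_sequence())
```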

Few technologies have seen the rate of change and exponential growth that AI has seen. Its popularity and rate of adoption can be attributed to the unparalleled convenience it affords us. This convenience need not come at grave cost to society and the planet, provided each AI solution is reined in and regulated diligently both before and after it is deployed.

Here are 5 ways AI can enhance your brand's sustainability initiatives:

  1. It can examine a company's supply chain data to discern ways to improve transportation efficiency and reduce waste, while making more socially and ecologically responsible decisions around sourcing, manufacturing, packaging, and shipping.

  2. Its algorithms can scrutinise data from sensors and energy consumption logs to pinpoint areas where energy usage can be optimised, which helps reduce an organisation's carbon footprint (a minimal sketch of this follows the list).

  3. It can identify and promote the products and services in a company's inventory that are most environmentally and socially viable, prioritising their purchase above less sustainable options.

  4. It can take inventory of a company's energy usage and recommend ways to incorporate more renewable energy sources into the enterprise's energy mix.

  5. Computer vision and AI-powered machines can segregate, appraise, and recycle waste materials faster and more accurately than humans, making it easier to manage, monetise, and repurpose waste, thus diverting what would normally end up in landfill towards a thriving circular economy.
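Point 2 above mentions scanning sensor and consumption logs for optimisation opportunities. Here is a deliberately small sketch of that idea: flag off-hours windows whose consumption sits well above a learned baseline. The readings, hours, and thresholds are all made up for illustration.

```python
# Minimal sketch of point 2 above: scan hourly meter readings and flag windows
# where off-hours consumption is well above a learned baseline, i.e. candidate
# spots for optimisation. Data and thresholds are illustrative.

from statistics import mean, stdev

# (hour_of_day, kWh) readings, e.g. pulled from building sensor logs
readings = [(h, 40 + (120 if 9 <= h < 18 else 0)) for h in range(24)]
readings[2] = (2, 150)  # an anomalous overnight spike worth investigating

OFF_HOURS = [kwh for h, kwh in readings if h < 7 or h >= 20]
baseline, spread = mean(OFF_HOURS), stdev(OFF_HOURS)

flags = [
    (h, kwh)
    for h, kwh in readings
    if (h < 7 or h >= 20) and kwh > baseline + 2 * spread
]

for hour, kwh in flags:
    print(f"hour {hour:02d}: {kwh} kWh is well above the off-hours baseline "
          f"({baseline:.0f} kWh); check HVAC/lighting schedules.")
```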

2. Should brands embrace a reactive or proactive approach to sustainability and AI Integration?

A reactive approach to sustainability would involve responding to environmental or social concerns as they arise, rather than implementing preemptive measures. This approach is passive, as it resorts to doing the bare minimum, like using environmentally friendly materials or reducing carbon emissions only in response to public demand or regulations.

A reactive approach to AI integration would entail deploying analytics to detect and limit fraud and to build systems that prevent theft. Defensive data strategies are best used to minimise or mitigate risk, discern vulnerabilities and implement measures to alleviate them, promote compliance and regulatory frameworks, and ensure security (firewalls, intrusion detection systems, and data encryption). A defensive strategy is about control: it focuses on identifying and preventing negative outcomes and potential threats or attacks, but it does not formulate an alternative path where they would not occur in the first place.

On the other hand, a proactive approach to sustainability and AI integration involves taking preemptive measures to integrate sustainability and AI ethics across all of a brand's operations. This approach advocates opting for the best path rather than a merely better path, which compares itself to, and is thus limited by, what preceded it. A proactive approach is opportunistic, and pioneers innovation.

For example, AI technologies like generative design can help brands create products that are more sustainable from the outset. Generative design uses algorithms to generate thousands of design options, each with different configurations, materials, and manufacturing processes. From these options, a design that is more environmentally friendly can be selected based on factors that are a priority for the brand, such as energy efficiency and use of recycled materials.
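As an illustration of that selection step, the sketch below ranks hypothetical candidate designs by a weighted sustainability score. The attributes, values, and weights are assumptions; in practice they would come from the generative design tool and the brand's own priorities.

```python
# Illustrative sketch of the selection step after generative design: rank
# candidate designs by a weighted sustainability score. Candidate values and
# weights are made up; a real pipeline would pull them from the design tool.

candidates = [
    {"name": "design_A", "energy_kwh_per_unit": 1.8, "recycled_pct": 60, "cost": 4.2},
    {"name": "design_B", "energy_kwh_per_unit": 2.5, "recycled_pct": 85, "cost": 3.9},
    {"name": "design_C", "energy_kwh_per_unit": 1.5, "recycled_pct": 40, "cost": 5.1},
]

# Brand priorities: lower energy and cost are better, higher recycled share is better.
WEIGHTS = {"energy_kwh_per_unit": -0.5, "recycled_pct": 0.3, "cost": -0.2}


def normalise(values: list[float]) -> list[float]:
    """Scale a column of attribute values to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]


def score_all(designs: list[dict]) -> list[tuple[float, str]]:
    """Weighted sum of normalised attributes, highest score first."""
    scored = [0.0] * len(designs)
    for key, weight in WEIGHTS.items():
        for i, norm in enumerate(normalise([d[key] for d in designs])):
            scored[i] += weight * norm
    return sorted(zip(scored, [d["name"] for d in designs]), reverse=True)


print(score_all(candidates))  # top entry = preferred design under these priorities
```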

AI can also analyse data on suppliers, transportation routes, and freight emissions to identify the areas of the supply chain that are most impactful on the environment. Route optimisation systems enhance an enterprise's logistics, which provides significant financial and environmental benefits. This information can then be used to make data-driven decisions about how to reduce emissions and improve sustainability. AI can also reduce the number of defective products churned out, which diminishes the number of returns sent back to the brand by consumers each year, effectively reducing GHG emissions.
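A back-of-the-envelope version of that freight analysis looks like the sketch below: estimate CO2e for each routing option from distance, load, and a per-mode emission factor, then prefer the lowest. The factors and routes are rough, illustrative numbers, not authoritative data.

```python
# Sketch of a freight-emissions comparison: estimate CO2e per route option from
# distance, load and a per-mode emission factor, then sort lowest-first.
# Emission factors below are rough, illustrative values only.

EMISSION_FACTORS_KG_PER_TONNE_KM = {"air": 0.60, "road": 0.10, "rail": 0.03, "sea": 0.015}

routes = [
    {"name": "air direct", "mode": "air", "distance_km": 800},
    {"name": "road haul", "mode": "road", "distance_km": 1_050},
    {"name": "rail + road", "mode": "rail", "distance_km": 1_300},
]

SHIPMENT_TONNES = 12


def co2e_kg(route: dict) -> float:
    """Estimated shipment emissions for one routing option."""
    return EMISSION_FACTORS_KG_PER_TONNE_KM[route["mode"]] * route["distance_km"] * SHIPMENT_TONNES


for r in sorted(routes, key=co2e_kg):
    print(f"{r['name']:<12} ~{co2e_kg(r):,.0f} kg CO2e")
```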

AI-enabled computer vision can improve worker safety and compliance with safety rules. Manufacturing facilities can be outfitted with smart cameras at specific locations to check whether workers are wearing their safety equipment and following the rules. Such a system can also recognise other potential risks in the facility and alert the appropriate safety or operations manager for further action. AI can also send in IoT devices such as drones and robots to conduct inspections or site visits that would be hazardous or unsafe for a human being, which is of immense benefit to insurance companies.
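The alerting step of such a system can be sketched simply: given per-frame detections of people and the protective equipment visible on each, flag anyone missing required gear and route a notification to the responsible manager. The detection output, zones, and contact addresses below are hypothetical placeholders for a real vision model and notification service.

```python
# Sketch of the alerting step downstream of a PPE detector: if a person appears
# without a hard hat or vest, notify the responsible safety manager.
# The detections dict stands in for output from a real vision model.

REQUIRED_PPE = {"hard_hat", "hi_vis_vest"}
ZONE_CONTACTS = {"assembly": "safety.lead@example.com", "loading_dock": "ops.manager@example.com"}

# Hypothetical per-frame detector output: people and the PPE seen on each
frame = {
    "zone": "loading_dock",
    "people": [
        {"id": 1, "ppe": {"hard_hat", "hi_vis_vest"}},
        {"id": 2, "ppe": {"hi_vis_vest"}},  # missing hard hat
    ],
}


def alerts_for(frame: dict) -> list[str]:
    """Build one alert message per person missing required PPE."""
    messages = []
    for person in frame["people"]:
        missing = REQUIRED_PPE - person["ppe"]
        if missing:
            contact = ZONE_CONTACTS[frame["zone"]]
            messages.append(
                f"Alert {contact}: person {person['id']} in {frame['zone']} "
                f"missing {', '.join(sorted(missing))}."
            )
    return messages


print("\n".join(alerts_for(frame)))
```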

Brands can also use AI to nudge customers towards more sustainable behaviours. For example, an e-commerce platform may use an AI engine to recommend products that are more environmentally friendly, have a lower carbon footprint, or are produced from natural fibres and dyes. These small nudges can add up over time, resulting in more sustainable behaviour overall.
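One simple way to implement such a nudge is to re-rank an existing recommendation list by blending the engine's relevance score with a sustainability score, so greener items surface first. The SKUs, scores, and blend weight below are invented for illustration.

```python
# Sketch of a sustainability nudge: re-rank recommended items by blending the
# recommender's relevance score with a sustainability score, so greener items
# surface first. Scores and the blend weight are illustrative.

recommendations = [
    {"sku": "tee_conventional", "relevance": 0.92, "sustainability": 0.30},
    {"sku": "tee_organic", "relevance": 0.88, "sustainability": 0.85},
    {"sku": "tee_recycled", "relevance": 0.80, "sustainability": 0.90},
]

NUDGE_WEIGHT = 0.35  # how strongly sustainability influences the final ranking


def blended(item: dict) -> float:
    """Weighted mix of relevance and sustainability."""
    return (1 - NUDGE_WEIGHT) * item["relevance"] + NUDGE_WEIGHT * item["sustainability"]


for item in sorted(recommendations, key=blended, reverse=True):
    print(f"{item['sku']:<18} blended score {blended(item):.2f}")
```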

A proactive approach can also imply an offensive AI strategy, which means maintaining the leading edge by identifying and capitalising on opportunities, as well as weaknesses, before competitors can. This could be through tactics such as data mining, social engineering, and targeted attacks. Offensive AI is often used in cybersecurity by ethical hackers to test the security of a company's systems. Both defensive and offensive AI strategies are essential for maintaining a strong security posture and preventing cyberattacks; it is important to use a combination of the two to ensure comprehensive protection against potential threats.

To weight the conversation with my perspective: I believe every effort benefits from a proactive approach, particularly when it pertains to sustainability and AI integration. The success of sustainability initiatives is conveyed in impact-relevant metrics. With AI integration, success is assessed through the AI Maturity Scale, which serves as a way to gauge how intuitively aligned your AI solution's key capabilities are with your company's purpose, business strategy, operational goals, talent, culture, organisational structure, stakeholders, and customers.

3. What are the key factors that a brand should consider when integrating AI to be ethical and sustainable?

Here's a checklist of things, in no specific order, to bear in mind before you elect to integrate AI into your effort:

  • Do you have a clear definition of the problem the business needs to solve that is best solved by AI?

  • Clarify your intention, and craft an execution strategy that exemplifies it.

  • Take time to understand the long-term impacts of your AI's use case.

  • Discern if, when and how it should be implemented to reduce error rates and augment human expertise and knowledge in technical areas.

  • Align the AI with your key strategic goals and KPIs.

  • Keep it simple, to keep it scalable. Simplicity invites less resistance as it is easier to comprehend and implement.

  • Up-skill your employees so they do not feel threatened by the AI integration, but rather feel well equipped to grow with the technology's scope of expression.

  • Make the team working on the AI integration interdisciplinary and diverse to avoid data myopia and bias.

  • Create an iterative AI handbook of policy guidelines that evolves with the technology and its scope of expression within the enterprise.

  • Reduce polarising effects, e.g., "Winner Takes All" economic models, social media tribalism, biases in training data sets, job losses to automation etc.

  • Cost of recovery: A recent survey by McKinsey found that AI could improve business efficiency by up to 40% and reduce operational costs by up to 30%.

  • Accountability: When an AI model you trust and defer to results in loss, liabilities, lawsuits, injury or, worse, death, who is responsible?

  • Transparency: AI's deep learning models have multiple hidden layers and require interpretability before incorporation into the daily flow of an institution or enterprise. For AI to remain ethical it must offer transparency, i.e. allow us to perceive how it reached a specific conclusion. Enterprises must have visibility into: (i) criteria and methods of data selection, (ii) training data sets, (iii) the system's capacity for explainability, (iv) bias blind spots in the training data sets, and (v) model version updates/upgrades.

  • Unorganised big data: How much of it is unstructured data versus structured records? Companies sitting on mountains of data that have yet to be combed through, inferred from, and made useful could opt for an AI model that turns their big data into small insights that make a vast difference. What are the benefits of investing in knowledge discovery infrastructure for your firm?

  • Product design efficiency: Does your product engineering and production team need to be streamlined to reduce waste? AI could show you how to reuse historical parts, optimise design elements and interdependencies, support preproduction planning, and enhance existing workload.

  • Enhancing product performance: Do your products' performance evaluations hinge on multiple metrics, e.g. speed and fuel economy in the automotive industry? If so, predictive modelling can evaluate competing attributes of a design and reconcile them optimally, or optimise attributes to work synergistically.

  • Proactive disclosure: Does your company lay out clear guidelines around disclosure when an AI is interacting with a human being? Touch points can include, but are not limited to, assistants, schedulers, HR tutorials, HR assistance, tech support, customer support, client services, etc.

  • Due Diligence: Have you laboriously scrutinised every aspect of your AI solution, more so than you would other projects you launch?

4. What are some use cases for AI in the sphere of brand sustainability?

Here are some of the use cases that inspired me:

Google's DeepMind AI reduced the energy used to cool its data centres by 40%, which also cut overall energy overhead by 15%.

Winnow Vision uses computer vision to reduce food waste in hospitality-industry kitchens. Winnow has successfully cut food waste at Hilton Tokyo Bay by 30%.

WINT's remarkably accurate system detects leaks that even human staff miss on routine inspections, preventing both water waste and water-based property damage. On average, buildings waste 25-30% of the total water they consume.

AI paired with satellite imagery, can identify changes in land use, vegetation, forest cover, and the effects of natural disasters. AI provides an interoperable, cross-border solution when entire regions are impacted by extreme environmental events. A region's socio-economic resilience is defined by the inequalities that exacerbate risks and vulnerabilities within its landscape. Technology can prove a great equaliser.

Wildlife Insights sifts through, analyses and categorises camera trap data.

xView2 offers aerial-view images of building and infrastructure damage, which its machine learning algorithms then expeditiously categorise according to severity of impact. No existing methodology is as quick as this system.

Ororatech homes in on early detection and real-time monitoring of wildfires.

Precision AI uses drones to identify weeds and deliver the exact quantity of herbicide needed, instead of spraying the whole field. It attests to reducing chemicals sprayed on agricultural produce by 90%.

Blue River Technology uses AI to detect the presence of invasive species and other changes in biodiversity expediently.

Global Fishing Watch is a collaboration between Google, Skytruth and Oceana, to monitor illegal, unreported and unregulated fishing in the world's oceans. AI coupled with satellite data increases transparency and awareness around over-fishing by plotting suspicious areas, highlighting hotspots and training the AI on suspicious boat activity.

5. Are there any regulatory frameworks for AI, and if so, have any been effective?

There are a few intergovernmental and/or international regulatory bodies:

  1. AI Treaty (aka AI Convention): The first legally binding international convention on AI; 46 countries are members, but it is slow moving. It has to be individually ratified by each member and assimilated into each member nation's national policies and laws for it to result in tangible impact. Government bureaucracies are not going to be able to move as quickly as AI does. Countries may also be able to opt out of specific moratoriums and stringent rules, which defeats the purpose.

  2. OECD (Organisation for Economic Co-operation and Development) AI Policy Observatory: combines resources from across the OECD and its partners from all stakeholder groups. Since the OECD maintains economic progress and world trade as its priority, many are skeptical that this bias will compromise the integrity of the policies and regulatory frameworks the OECD AI Policy Observatory creates. Additionally, its high-level principles are hard to put into practice.

  3. GPAI (Global Partnership on Artificial Intelligence): convened in Canada by Trudeau, with 29 member nations belonging to this international initiative. It was established to guide the responsible development and use of artificial intelligence in a manner that respects human rights and shared democratic values. It has not published enough to keep pace with developments in this space.

  4. EU (European Union) AI Act: abides by European law, and covers healthcare, education, facial recognition, and generative AI. There are several loopholes and exceptions in the law, which limit the Act's ability to ensure that AI remains a force for good in an EU citizen's life. The law is inflexible and provides no mechanism to label as high risk an emergent nefarious use case in a sector other than those covered by the law.

  5. ISO (International Organization for Standardization): has published standards corporations should adhere to when building risk management practices, conducting impact assessments, and managing the development of AI. Its standards are general and could apply to any industry, which makes them harder to align with a specific sector. People also question whether technical experts and engineers should be the ones framing ethical risks, because that may prove parochial.

  6. UNESCO AI Ethics: this framework has been immensely popular in developing countries that are newer to AI ethics. Two of its principal members, China and Russia, have been excluded from Western AI ethics debates, and their inclusion makes people skeptical of the entire framework, as both countries have used AI to surveil people. Its voluntary guidelines are also hard to enforce.

Additional resources:

  1. WEF AI C-Suite Toolkit: a powerful resource that helps a company's leaders protect both the enterprise and society at large from potential risks and threats posed by AI, by helping the company formulate a policy guidelines handbook.

  2. CAIS (Center for AI Safety): establishes rigorous techniques to build safe and trustworthy AI that can be adopted safely and successfully into scalable use cases. Dan Hendrycks, the director of CAIS, who has authored many relevant papers on AI, had the following to say in his paper entitled Natural Selection Favors AIs Over Humans: "Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future." For those alarmed, Hendrycks' center offers helpful tools such as its video lesson on Machine Learning Safety.

Have any of the frameworks mentioned been effective? Not entirely, and for two main reasons: 1. Laws and legal frameworks are going to prove insufficient, slow moving, and inhibitory, and we need agile agencies to take quick, adaptive actions in a synchronised capacity across the world. We need governments to work expediently and cooperatively, but no institutional framework for power apart from a dictatorship is currently capable of taking critical actions in a timely manner, which threatens the wellbeing of a democracy. 2. Many of these AI systems need access to public data sets that governments restrict, which will set us back with regard to collective environmental and social good. For instance, the Global Fishing Watch program benefitted from the Indonesian and Peruvian governments officially making their vessel monitoring system data public.

While the panel concluded on the premise that we do not know what we do not know, I personally believe it is critically important to mind that which we do not know. To not know the consequences in their entirety and to casually choose to deploy AI solutions anyway seems incredibly reckless and terrifying. It shows a fundamental lack of integrity toward collective wellbeing. Shrugging our shoulders at what could happen with technology has resulted in enough disasters in the past for us not to want to repeat history; case in point, the atomic bomb.

Asher Jay

Creative Conservationist, National Geographic Explorer

http://www.asherjay.com