Get AI Ready – What IT Leaders Need to Know and Do

Ready your enterprise to capture AI opportunities and bolster your cybersecurity, data and AI policies and principles.

Define your AI ambitions with the AI Opportunity Radar

Download this guide to AI readiness.

By clicking the "Continue" button, you are agreeing to the Gartner Terms of Use and Privacy Policy.

Contact Information

All fields are required.

Company/Organization Information

All fields are required.

Optional

4 key initiatives to get your enterprise AI ready

Whether your organisation’s ambition is for AI to augment everyday processes or create something game-changing, the organisation needs a set of foundational capabilities to succeed.

This guide can help IT leaders ready their organisations to:

  • Define their “AI ambition” and spot AI opportunities

  • Prepare AI cybersecurity

  • Make data AI-ready

  • Adopt AI principles

Download your copy now.

AI ambition must weigh feasibility, opportunity, risk

More than 60% of CIOs say AI is part of their innovation plan, yet under half feel the organisation can manage its risks. Narrow the gap, first by defining your AI ambitions.

Define your AI opportunities, deployment options and risks

GenAI has enabled machines to transition from being tools to being teammates. This is a big shift that comes with a potential dark side. The C-suite expects CIOs to lead the organisation’s AI strategy to capitalise on the benefits of AI while avoiding the risks. 

The stakes are high, given the combination of AI excitement and disillusionment that exists in every organisation (disillusionment because the majority of AI projects have failed to deploy as projected). 

Gartner research finds that between 17% and 25% of organisations said they planned to deploy AI within the next 12 months every year from 2019 to 2024, yet the annual growth of production deployments was only 2% to 5%.

To help increase the success rate, CIOs should start by helping set the organisation’s AI ambition; that is, where and how you will use AI in the organisation. Given that today’s AI can do everything, including decide, take action, discover and generate, it is as important to know what you will not do.

An AI plan must take account of three key elements:

  1. AI opportunity ambition

    This reflects the type of business gains you hope to achieve from AI. Opportunity ambition identifies where you will use AI (e.g., for internal operations or customer-facing activities) and how (e.g., to optimise everyday activities or create game-changing opportunities). Leverage the Gartner AI Opportunity Radar to map your opportunity ambition.

  2. AI deployment

    This reflects the technological options available for deploying AI, which can enable or limit the opportunities you hope to pursue. Organisations can deploy AI from public, off-the-shelf models trained on public data, leverage a public model and data adapted with your proprietary data, or build in-house as a bespoke algorithm trained on your data. The more customisation involved, the higher the investment cost and time to deployment. Yet greater customisation also enables game-changing opportunities.

  3. AI risk

    AI risk comes in many forms, including unreliable or cryptic outputs, intellectual property risks, data privacy concerns and cyber threats. There are also emerging regulatory risks related to the rules and restrictions that different jurisdictions may place on AI, including those related to copyright. Your organisation will need to define its risk appetite as it relates to degrees of automation and degrees of transparency.

Engage the executive team to choose AI opportunities to pursue

AI falls into two high-level categories in the organisation:

  1. Everyday AI enhances productivity by enabling humans to work faster and more efficiently at the things you already do.
  2. Game-changing AI enhances creativity by either enabling you to create results via new products and services, or through new core capabilities. Game-changing AI will disrupt business models and industries.

Both everyday AI and game-changing AI have internal and external uses. Defining your AI ambition involves examining which combinations of everyday and game-changing AI and which internal or external use cases you will pursue.

Investment expectations will influence these decisions, given that game-changing AI is not cheap. Though 73% of CIOs say they plan to invest more in AI in 2024 than they did in 2023, CFOs are sceptical about the results: 67% of finance heads say that digital investments have underperformed in terms of expectations.

To define realistic AI ambitions, consider three AI investment scenarios with your C-suite team:

  1. Defend your position by investing in quick wins that improve specific tasks. Everyday AI tools have a low cost barrier to adoption, but they will not give your organisation a sustainable competitive advantage. Investment here helps you to keep up with the status quo.

  2. Extend your position by investing in tailored and bespoke applications that provide a competitive advantage. These AI investments are more expensive and take more time to deliver an impact, but they are also more valuable.

  3. Upend your position by creating new AI-powered products and business models. These investments are very expensive, risky and time-consuming, but they have enormous reward potential and could disrupt your industry.

Finally, as CIOs engage business executives on their AI opportunity ambition, ensure they have an accurate understanding of feasibility. For example, you cannot capture opportunities without the requisite technology. You also cannot use AI when those who will use it, internally and externally, are not ready for it.

The Gartner AI Opportunity Radar (complete the form above for detail) maps AI ambition in terms of both opportunity and feasibility. 

Note that the biggest opportunities are probably disruptive innovations that could upend an industry and deliver high economic returns, but these are short on feasibility because they involve unproven technology and/or unwilling stakeholders.

 

Understand AI deployment options and trade-offs in speed and differentiation

The past six months have seen a flurry of AI models and tools released into the market. In addition, many large incumbent independent software vendors (ISVs) are embedding AI into their existing applications. Such competitive jostling is characteristic of most high-stake, early-stage markets and makes for a confusing array of choices.

Using GenAI as an example, Gartner sees five emerging approaches for deploying AI:

  1. Consume GenAI embedded in applications, such as using an established design software application, which now include image generation capabilities (e.g., Adobe Firefly).

  2. Embed GenAI APIs in a bespoke application frame so that enterprises can build their own applications and integrate GenAI via foundation model APIs.

  3. Extend GenAI models via data retrieval, for example using retrieval augmented generation (RAG), which enables enterprises to retrieve data from outside a foundation model (often your internal data) and augment prompts with it to improve the accuracy and quality of model response for domain-specific tasks.

  4. Extend GenAI models via fine-tuning of a large, pretrained foundation model with a new dataset to incorporate additional domain knowledge or improve performance on specific tasks. This often results in bespoke models that are exclusive to the organisation.

  5. Build bespoke foundation models from scratch, fully customising them to your own data and business domains.

Each deployment approach comes with trade-offs between benefits and risks. The key factors influencing these trade-offs are:

  • Costs – Embedded applications and embedding model APIs are the least expensive of the AI deployment options. Building a model from scratch would be the most expensive. In between, costs vary widely, especially with fine-tuning, for which costs are high when updating models with billions of parameters.

  • Organisational and domain knowledge – Most AI foundation models are general knowledge models. Improving accuracy requires organisations to bring domain and use case specificity through data retrieval, fine-turning or building your own.

  • Ability to control security and privacy – Security and privacy considerations are currently quite broad with GenAI. Building your own models or creating bespoke models via fine-tuning provides stronger ownership of key assets and more flexibility in terms of the controls you can implement. 

  • Control of model output – An AI foundation model is prone to hallucination risks, as well as propagating biased or harmful behaviour. Data retrieval, model fine-tuning and building your own models might be preferred in high-control environments. Business-critical applications will require a human in the loop.

  • Implementation simplicity – Consuming embedded applications and embedding model APIs has advantages due to their inherent simplicity and time to market. They do not have a significant negative impact in terms of current workflows.

Articulate the AI risk tolerance of each function or business unit

Finalising the AI opportunities the organisation will pursue requires business leaders to articulate the level of risk they are willing to accept related to issues like AI reliability, privacy, explainability and security:

AI reliability

Depending on how it is trained, all AI may be vulnerable to some degree of:

  • Factual inaccuracies or partially true outputs that are wrong on important details​

  • Hallucinations or fabricated outputs

  • Outdated information due to knowledge cutoffs in the training data

  • Biased information in the training data, resulting in biased outputs

AI privacy

Privacy issues vary from the concerns about identifiable details in the training data to sharing data or outputs, including:

  • Sharing user information with third parties without prior notice, including vendors or service providers, affiliates and other users

  • Processing (re)identifiable data

  • Training with (re)identifiable data that can have a real-life impact once in production

  • Sensitive or personal data being unintentionally leaked​

  • Proprietary, sensitive or confidential information entered as prompts or for data retrieval could become part of the knowledge base used in outputs for other users

AI explainability

Machine learning (ML) models are cryptic to users and sometimes even to skilled experts. Though data scientists and model developers understand what their ML models are trying to do, they cannot decipher the internal structure or the algorithmic means by which the models process data. This lack of model understandability and therefore explainability, which Gartner defines as capabilities that clarify a model’s functioning, limits an organisation’s ability to manage AI risk. Lack of explainability makes a model’s outputs:

  • Unpredictable

  • Unverifiable

  • Unaccountable

AI security

AI may become a new target for malicious actors to either access private data or insert code or training parameters to get the AI to act in ways that serve the adversary’s interests. For example:

  • Personal or sensitive information stored by an AI model being accessed by hackers.​

  • Hackers using prompts to manipulate a large language model (LLM) to give away information it should not.

  • LLMs being tricked into writing malware or ransomware codes.​

Work with executive leaders to define their risk ambition

Balancing the risks posed by AI with the opportunities the organisation wants to pursue requires CIOs to help define the relative roles of humans and AI. The goal is to strike a balance between the degree of automation (from fully automated to “human in the loop”) and the degree of explainability (from fully cryptic “black box” AI to fully explainable).

Each CxO needs to declare their acceptable AI risk levels for the major processes in their departments and ensure they align with the AI opportunities they hope to pursue. For example, the head of HR might have a risk tolerance level centred on making the “safest bet” because of the sensitive nature of their work, while the head of customer service might aim for “responsible automation” to allow for automation that can be explained to customers, if required.

Experience Information Technology conferences

Join your peers for the unveiling of the latest insights at the Gartner conferences.

Drive stronger performance on your mission-critical priorities.