A recent SolarWinds survey of nearly 700 enterprise technologists details apprehensions about AI readiness.

Fewer than half of these professionals believe their company’s internal databases are ready for AI deployment. This skepticism stems from worries about the data quality, consistency, and integration capabilities essential for effective AI implementation.

More than one-third of respondents specifically cite concerns about the data used to train large language models (LLMs).

Concerns include the accuracy, bias, and comprehensiveness of the training data. Given that LLMs require vast amounts of data to function optimally, any deficiencies in the underlying data can significantly impair AI performance and outcomes.

The quality and integrity of training data are critical as they directly influence the reliability and trustworthiness of AI outputs.
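To make the idea of a data-quality check concrete, the short sketch below profiles a hypothetical training dataset for duplicates, missing values, and overall completeness using pandas; the file name and column structure are illustrative assumptions, not details from the survey.

```python
# Hedged sketch of basic training-data quality checks with pandas.
# The file name and its columns are assumptions for illustration only.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical training set

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_values_per_column": df.isna().sum().to_dict(),
}

# A rough completeness score: the share of cells that are populated.
report["completeness"] = float(df.notna().mean().mean())

print(report)
```

Checks like these are a starting point; bias and representativeness require domain-specific analysis beyond simple profiling.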

Big bets on AI

Despite these reservations, companies are making substantial financial commitments to AI technologies. Nearly two-thirds of surveyed organizations have invested $5 million or more in emerging AI capabilities.

Almost one-third of companies have allocated over $25 million towards AI advancements.

This level of investment underscores a strategic prioritization of AI, reflecting its perceived importance in gaining competitive advantage.

Investments must span not only technology acquisition but also talent, infrastructure, and the development of AI-driven solutions tailored to specific business needs.

AI optimism meets reality

IT professionals are optimistic about the potential of large language models (LLMs). These models are being leveraged to automate various IT processes, significantly reducing manual workloads and operational inefficiencies.

By employing machine learning, companies can improve their ability to detect anomalies, helping them identify and address technical issues before they escalate.
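As a rough illustration of what machine-learning-based anomaly detection can look like in an IT monitoring context, the sketch below flags unusual CPU readings with scikit-learn’s IsolationForest; the metric values and contamination rate are assumptions chosen for demonstration.

```python
# Minimal sketch: flagging anomalous infrastructure metrics with scikit-learn.
# The CPU samples and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical CPU utilization samples (%) collected once per minute.
cpu_usage = np.array([32, 35, 31, 36, 34, 33, 97, 30, 35, 95]).reshape(-1, 1)

# IsolationForest isolates points that differ from the bulk of the data.
model = IsolationForest(contamination=0.2, random_state=42)
labels = model.fit_predict(cpu_usage)  # -1 marks an anomaly, 1 marks normal

for value, label in zip(cpu_usage.ravel(), labels):
    if label == -1:
        print(f"Anomalous reading: {value}% CPU")
```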

LLMs also play a key role in predictive maintenance and real-time problem-solving, thereby improving system reliability and uptime.

For example, automated scripts generated by LLMs can handle routine tasks such as system updates and security patching, allowing IT teams to focus on more strategic initiatives.
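The sketch below shows the kind of routine patching script an LLM might be asked to generate for a Debian/Ubuntu host; the commands, log file, and error handling are assumptions for illustration, and any generated script should be reviewed before running in production.

```python
# Illustrative sketch of a routine maintenance script of the kind an LLM
# might generate. The apt commands and log path are assumptions; adapt them
# to your environment and review any generated script before execution.
import logging
import subprocess

logging.basicConfig(filename="patching.log", level=logging.INFO)

def apply_security_updates() -> None:
    """Refresh package metadata and install pending upgrades."""
    commands = [
        ["apt-get", "update"],
        ["apt-get", "upgrade", "-y"],
    ]
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        logging.info("%s exited with code %d", " ".join(cmd), result.returncode)
        if result.returncode != 0:
            logging.error(result.stderr)
            break

if __name__ == "__main__":
    apply_security_updates()
```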

The integration of LLMs into IT operations is seen as a key enabler for building smarter, more resilient technology infrastructures.

Balancing AI’s growth with security worries

The swift progression of generative AI technologies has prompted significant concerns about data quality and security among IT practitioners.

While nearly 90% of IT staff and leaders maintain a positive outlook on AI, more than half express the need for concrete evidence of its benefits. This demand for tangible proof is driven by a desire to validate AI’s effectiveness in delivering measurable business value.

Supporting this, nearly half of the surveyed professionals have had negative experiences with AI, predominantly related to data privacy and security issues.

These negative experiences highlight the vulnerabilities associated with AI deployment, such as unauthorized access to sensitive data, data breaches, and the potential misuse of AI-generated information. The rapid adoption of AI also raises ethical considerations, particularly regarding the transparency and accountability of AI decision-making processes.

The dual challenge of leveraging AI’s capabilities while mitigating its risks remains a top priority for IT leaders.

Building for the future

Krishna Sai, SVP of Engineering at SolarWinds, stresses the importance of developing AI as a sustainable solution rather than a fleeting trend.

It’s easy to get caught up in the excitement of new innovations.

Without a thoughtful and comprehensive strategy, AI implementations can quickly become obsolete or fail to deliver long-term value.

A sustainable AI strategy involves several key components:

  • Data integrity and quality: Ensuring that the data used to train and operate AI systems is accurate, complete, and free from biases is essential. Poor data quality can lead to flawed AI outcomes, reducing trust in the technology.
  • Scalability: AI solutions should be designed to scale with the organization’s growth. This means having the infrastructure and resources in place to support increasing data volumes and more complex AI models over time.
  • Ethics and governance: Establishing robust governance frameworks that address ethical considerations, data privacy, and compliance with regulations is crucial. This helps to mitigate risks and fosters a responsible AI environment.
  • Continuous improvement: AI technologies and methodologies are constantly evolving. A sustainable AI strategy includes mechanisms for regular updates, retraining of models, and incorporating the latest advancements to stay competitive.

Overcoming security roadblocks and adoption fears

Organizations are also using AI to optimize their IT infrastructure. This includes managing cloud resources, balancing workloads, and maintaining optimal performance of applications and services.

Over one-third of businesses are currently utilizing AI tools to streamline IT operations.

Security remains a primary concern for IT professionals when it comes to adopting AI. Nearly one-quarter of IT professionals identify security issues as the most significant barrier to AI adoption.

Concerns revolve around the potential vulnerabilities that AI systems can introduce, such as data breaches, unauthorized access, and misuse of AI-generated information.

To address these concerns, a majority (72%) of survey respondents advocate for increased government regulation to enhance AI security. They believe that regulatory frameworks can provide clear guidelines and standards for AI development and deployment, which can help mitigate risks and build trust in AI technologies.

Security measures that organizations can implement include:

  • Robust encryption: Protecting data at rest and in transit using advanced encryption techniques (see the sketch after this list).
  • Access controls: Implementing strict access controls to ensure that only authorized personnel can access sensitive data and AI systems.
  • Regular audits: Conducting regular security audits and assessments to identify and address vulnerabilities.
  • Incident response plans: Developing and maintaining comprehensive incident response plans to quickly address any security breaches or issues.
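As a concrete example of the first measure above, the sketch below encrypts a record at rest with the cryptography library’s Fernet recipe; the key handling shown here (a key generated in memory) is a simplifying assumption, and production systems would typically draw keys from a secrets manager or HSM.

```python
# Minimal sketch of encrypting data at rest with symmetric encryption,
# using the cryptography library's Fernet recipe. In practice the key would
# come from a secrets manager or HSM rather than being generated in place.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # store securely, never alongside the data
cipher = Fernet(key)

record = b"customer_id=4821;model_output=approved"
token = cipher.encrypt(record)  # ciphertext safe to write to disk or a database

# Later, an authorized service holding the key can recover the record.
assert cipher.decrypt(token) == record
```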

Insights you need to know

The survey reveals substantial investments in AI across various organizations:

Nearly 66% of companies have invested $5 million or more in AI capabilities, and about 33% have allocated over $25 million to AI technologies.

These figures indicate a strong belief in AI’s potential to drive business value and innovation.

The positive sentiment surrounding AI

Nearly 90% of IT staff and leaders hold a positive opinion of AI, reflecting widespread optimism about its potential benefits.

The call for concrete AI evidence

Despite the positive sentiment, over 50% of respondents express the need for more tangible proof of AI’s benefits. This demand for evidence shows the importance of demonstrating clear, measurable outcomes from AI investments.

When AI goes wrong

Nearly 50% of surveyed professionals have experienced negative incidents with AI, primarily related to data privacy and security issues.

These negative experiences underscore the challenges and risks associated with AI implementation.

A significant 72% of respondents support increased government regulation to address security concerns. This support suggests a desire for clear standards and guidelines to mitigate risks and enhance the safe deployment of AI technologies.

Key takeaway

Companies are pouring millions into AI capabilities, driven by the potential of large language models to transform IT operations. However, concerns about data readiness, security, and the need for tangible proof of AI benefits persist.

Alexander Procter

July 3, 2024
