Artificial Intelligence and Generative AI


Artificial Intelligence

Artificial Intelligence, or AI, refers to the application of statistical data interpretation to decision making within software applications. AI is often viewed as an omniscient tool that can draw on knowledge from every domain and operate in the real world with a deep understanding of fundamentals. In reality, AI reflects the challenges its human developers face in aggregating data, acquiring computing resources and discerning information quality. Artificial intelligence applications are typically limited by the constraints of their development, funding, knowledge and environment.

Artificial General Intelligence, also known as Strong AI, refers to a type of artificial intelligence that can understand, learn, and apply knowledge across a wide range of tasks at a level equivalent to that of a human being. It implies a machine's ability to transfer knowledge from one domain to another and self-improve through learning. It is important to note, however, that no AI currently exists that can fully match human intelligence across all domains.

Methods used to build machine learning models are still valuable when selectively applied within their specific domain, and models are built to specialize in different domains of expertise.

For example, the popular AI chatbot ChatGPT uses a large collection of model parameters to determine the probability of the text it outputs in a conversation. The model is trained entirely on text and generates output by repeatedly predicting the most likely next word. Because the text output is based on statistical likelihood rather than an intuitive understanding of the content it ingests, the tool is unreliable as a sole source of truth for investigators. However, ChatGPT excels at natural language processing tasks that do not depend on knowledge outside the related text. Read the section "Prompt Engineering" for question examples and additional insights on effective use of AI language tools.
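
To make the next-word prediction loop concrete, the sketch below samples each new word from a small, made-up probability table. It is only an illustration; a real model such as ChatGPT computes these probabilities from billions of learned parameters rather than a lookup table.

import random

# Toy "language model": for each current word, a made-up probability
# distribution over possible next words. A real LLM learns these
# probabilities from enormous amounts of text, not a hand-written table.
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cloud": 0.5, "model": 0.5},
    "a": {"model": 0.7, "prompt": 0.3},
    "cloud": {"service": 1.0},
    "model": {"generates": 1.0},
    "prompt": {"matters": 1.0},
}

def generate(max_words=5):
    """Generate text by sampling one word at a time from the probability table."""
    word, output = "<start>", []
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(word)
        if not choices:
            break  # no known continuation for this word
        words, probs = zip(*choices.items())
        word = random.choices(words, weights=probs)[0]
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the model generates" or "a prompt matters"

Each run produces fluent-looking but purely probabilistic output, which is why such output still requires verification.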

Foundational Models

Foundational Models, or FMs, are machine learning models trained on broad sets of unlabeled data. Smaller machine learning models are developed to capture context relevant to a specific attribute and can be combined with other models to integrate new knowledge and derive new skills. Most of these smaller models begin with specific datasets (topics, speech patterns, semantics, etc.) used to recognize sets of trends and are combined into Foundational Models that serve as a starting point for machine learning applications. After packaging, they can be adapted to a wide range of downstream tasks and fine-tuned for an array of applications.
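
As a small illustration of reusing a pretrained model for a downstream task, the hedged sketch below loads a publicly available checkpoint with the Hugging Face transformers library (assumed to be installed; any framework with pretrained checkpoints would work) and applies it to sentiment classification without additional training.

# Sketch: adapting a pretrained model to a downstream task (sentiment analysis).
# Assumes the "transformers" package is installed; the checkpoint named below
# is a public, already fine-tuned model used here purely for illustration.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The fine-tuned model adapted well to our task."))
# Expected shape of the result: [{'label': 'POSITIVE', 'score': ...}]

Further fine-tuning on domain-specific data follows the same pattern: start from the pretrained checkpoint and continue training on a smaller, labeled dataset.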

These models are typically trained by groups independent from the University and can operate unexpectedly, harbor biases and exclude current information. It is important not to confuse the statistically determined output of a machine learning model with verified research based on fact-checking and true understanding. The use of external Foundational Models in investigative discovery requires the investigator to exercise critical thinking and verify information at all levels of their application, and to follow institutional guidelines with respect to acceptable use, data privacy and publishing.

Generative AI

Generative AI refers to AI models used to create new outputs using contextual information from a program or user. Common forms of generative AI include code, image & video generators, as well as conversational chatbots. These models are usually composed of foundational models plus additional tuning to label and interpret sets of data. Often, the models work to convert one medium of information into another, including but not limited to: converting a description into a new image, describing images to an end user, re-interpreting text into specific bullet points, converting dialogue text into audio clips based on recorded voices, and so on.

Generative pre-trained transformers, or GPTs, rely on input tokens that are transformed into the output medium of choice. When a machine learning model can translate between different media of communication, it is referred to as multi-modal AI.
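
To make the idea of input tokens concrete, the sketch below uses OpenAI's tiktoken library (one of several tokenizers, assumed to be installed) to show how a sentence becomes the integer tokens a GPT-style model actually consumes.

# Sketch: how text becomes the integer tokens a GPT-style model consumes.
# Assumes the "tiktoken" package is installed; cl100k_base is the encoding
# used by several recent OpenAI models.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

text = "Generative AI converts one medium of information into another."
tokens = encoding.encode(text)

print(tokens)                   # a list of integer token IDs
print(len(tokens), "tokens")    # token counts drive cost and context limits
print(encoding.decode(tokens))  # decoding round-trips back to the original text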

Large language models (LLMs) represent a class of artificial intelligence that focuses on natural language processing and providing relevant responses after analyzing bodies of text.

Existing models primarily use natural language processing and machine learning techniques to analyze large amounts of text data from various sources, including the greater internet, libraries, archive centers and more. By doing so, they can uncover patterns and connections in bodies of text that may not be as easily accessible to a human researcher.

As infrastructure, artificial intelligence will enable automation of generative and interactive tasks, including software development, concierge services, text extraction and more. Enabling access to AI products and tools will be instrumental in effectively utilizing public resources and reducing administrative overhead and delays.

Cloud Platforms for Generative AI

UT Dallas OIT makes a variety of private cloud services available to the University's research and administrative groups, including Microsoft Azure, Amazon Web Services and Google Cloud Platform.

A wide variety of artificial intelligence applications are available from our service providers and can support a broad range of compute needs. Offerings vary in function and scope of AI application, spanning natural language processing (NLP), computer vision and generative AI delivered as managed services. Selecting the best product for your use case depends on multiple factors, including cost, availability, existing projects, compatibility and expertise.

OIT maintains an Enterprise Agreement with Amazon, Microsoft and Google for educational pricing and privacy standards via Texas DIR and BAA contracts. Pricing estimates and opportunities to consult the Office of Information Technology are available by submitting a cloud hosting request. This is discussed under Getting Started With Generative AI.

Generative AI with Amazon Web Services and Bedrock

Amazon Web Services provides highly available, scalable, flexible cloud-based computing services for organizations and investigators to meet ever-changing computing requirements. Amazon Bedrock is a customizable generative AI platform available as a fully managed service from AWS. It offers a choice of high-performing Foundational Models from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta and Stability AI, as well as Amazon's own Titan foundational models.

These foundational models generally provide application program interfaces (APIs) along with a variety of capabilities to build generative AI applications. Services from AWS also provide the associated security and guardrails that ensure responsible use of data with AI and align with evolving UT Dallas Artificial Intelligence guidelines and policies.

Amazon Bedrock gives the end user the ability to evaluate multiple foundational models for a use case and supports model customization with private data. With fine-tuning and Retrieval Augmented Generation (RAG) techniques, the user can create customized agents that execute tasks against targeted enterprise systems and infrastructure. This keeps investigator-specific information private and exclusive and ensures that organizational control policies and guardrails are enforced.
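
The sketch below illustrates the basic RAG pattern in miniature: retrieve the most relevant private documents, then build a prompt that includes them as context. The documents and keyword-overlap retriever are invented for illustration; a production system would use a vector store or a managed offering such as Bedrock Knowledge Bases.

# Minimal sketch of Retrieval Augmented Generation (RAG): retrieve relevant
# private documents, then include them as context in the prompt sent to a
# foundational model. The keyword-overlap scoring here stands in for a real
# vector-similarity search.
DOCUMENTS = [
    "Cloud hosting requests are submitted through the OIT service catalog.",
    "Lab protocol 12 requires two-person review of all reagent orders.",
    "Grant X-123 funds GPU usage for natural language processing research.",
]

def retrieve(question, k=2):
    """Rank documents by how many words they share with the question."""
    words = set(question.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def build_prompt(question):
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("How do I submit a cloud hosting request?"))
# The assembled prompt would then be sent to a Bedrock-hosted model.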

Amazon Bedrock is a serverless technology, meaning most users will not need to manage any IT infrastructure. Users can securely integrate and deploy generative AI tasks and solutions into existing applications using AWS services already part of active investigations.
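
As a hedged starting point for calling a Bedrock-hosted model from existing code, the sketch below invokes Amazon's Titan text model through the boto3 SDK. The region, model ID and request format are assumptions to confirm against current AWS documentation and the model access granted in your account.

# Sketch: invoking a foundational model through Amazon Bedrock with boto3.
# Assumes boto3 is installed, AWS credentials are configured, and the account
# has been granted access to the Titan text model in the chosen region.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = json.dumps({
    "inputText": "Summarize the benefits of serverless generative AI in two sentences.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
})

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed model ID; verify in the Bedrock console
    body=request_body,
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])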

Generative AI with Microsoft Azure and OpenAI

Microsoft provides many institutional services to the university through Office 365, Active Directory and Azure. Microsoft Azure gives users an environment to run compute & storage workloads online without on-premises infrastructure. It acts as a developer gateway to Microsoft services and enables cost-effective automation & microservices.

Azure OpenAI Service

Following its partnership with and major investment in OpenAI, Microsoft has chosen to release new large language models as deployable services within Microsoft Azure subscriptions. These tools are currently available for use by institutions with an Enterprise Agreement with Microsoft.

Available AI tools from Microsoft include: Azure Machine Learning; Azure OpenAI Service (ChatGPT, GPT-3.5, GPT-4); AI Speech, Language, and Vision; Cognitive Search; and others.

Complete list of Microsoft AI services: https://azure.microsoft.com/en-us/products/ai-services
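
As a hedged sketch of how an Azure OpenAI deployment is called from code, the example below uses the openai Python SDK (version 1 or later) against an Azure endpoint. The endpoint URL, API version and deployment name are placeholders to replace with values from your own Azure OpenAI resource.

# Sketch: calling an Azure OpenAI chat deployment with the openai Python SDK (v1+).
# The endpoint, API version and deployment name below are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version; confirm in the Azure docs
)

response = client.chat.completions.create(
    model="my-gpt-4-deployment",  # your deployment name, not the model family name
    messages=[
        {"role": "system", "content": "You are a concise assistant for university staff."},
        {"role": "user", "content": "Summarize what Azure OpenAI Service provides."},
    ],
)

print(response.choices[0].message.content)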

Prompt Engineering

Due to input token requirements and differences in how GPT models process input, strategies for crafting the best possible input are required to effectively leverage the knowledge and capabilities of the model used. Each model has different input best practices, requirements and restrictions.

OIT recommends reviewing the prompt engineering documentation for your platform of choice. The example below illustrates the general approach.
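
As an illustration of the question examples mentioned in the Artificial Intelligence section, the hedged sketch below contrasts a vague prompt with a more engineered one that states a role, supplies context and constrains the output; the wording is only an example, not an official template.

# Sketch: the same request phrased as a vague prompt versus an engineered prompt.
# The engineered version states a role, supplies context and constrains the
# output format, which generally produces more usable responses.

vague_prompt = "Tell me about cloud AI services."

engineered_prompt = (
    "You are an IT analyst at a university.\n"
    "Context: we already have enterprise agreements with AWS, Microsoft and Google.\n"
    "Task: compare the managed generative AI offerings of the three providers.\n"
    "Format: a table with columns for service name, model choices and pricing model.\n"
    "Constraints: keep it under 200 words and flag anything that requires a "
    "security or data privacy review."
)

# Either string would be sent as the user message to the model of your choice.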
