Timeline and History of OpenAI
OpenAI was originally focused on developing AI and machine learning tools for video games and other recreational purposes. Less than a year after its official founding on Dec. 11, 2015, it released its first AI offering: an open source toolkit for developing reinforcement learning (RL) algorithms called OpenAI Gym. Over the next two years, OpenAI focused on more general AI development and AI research.
In 2018, OpenAI published a report to explain to the world what a Generative Pre-trained Transformer (GPT) is. A GPT is a neural network, or a machine learning model, created to function like a human brain and trained on input, such as large data sets, to produce outputs -- i.e., answers to users' questions.
In March 2019, OpenAI shifted from nonprofit to for-profit status and became formally known as OpenAI LP, controlled by parent company OpenAI Inc. Almost two years later, in January 2021, OpenAI introduced Dall-E, a generative AI model that analyzes natural language text from human users and then generates images based on what is described in the text.
Perhaps the company's best-known product is ChatGPT, released in November 2022 and heralded as the world's most advanced chatbot for its ability to provide answers to users on a seemingly unlimited range of topics. Its benefits and drawbacks, as well as its uses in various industries, are still being debated.
Elon Musk no longer serves on the company's board. Co-founder Sam Altman served as the company's CEO until November 2023, alongside president and chairman Greg Brockman, formerly the CTO of financial services and SaaS company Stripe, and chief scientist Ilya Sutskever, formerly of Google.
In November 2023, the board of directors fired Altman, stating that he had not been candid in his communications with the board. Brockman left the company soon after. Microsoft hired both three days later.
Emmett Shear, co-founder of Twitch, was hired as OpenAI's interim CEO after Altman's departure. Following the firing, approximately 500 of OpenAI's employees said they would quit if the board of directors didn't step down. After just five days, Altman and Brockman were rehired in their original roles at OpenAI under a new board of directors.
Notable projects and releases
OpenAI has been deemed revolutionary for its notable product offerings, which include the following:
OpenAI and Microsoft
At the start of 2023, Microsoft publicly committed to a multibillion-dollar investment in OpenAI, but its interest in the company is nothing new. In July 2019, OpenAI engaged in a multiyear partnership with Microsoft in which Microsoft's cloud platform, Azure, has been enhanced by AI-based computing products.
Microsoft's latest investment in OpenAI extends to Bing, its search engine. The company is using the same technology developed for ChatGPT to produce an AI-infused version of Bing. Concurrently, AI-based features have also been added to Microsoft's Edge browser, and ChatGPT functionality is being added to Microsoft 365 products such as Outlook and Teams.
Criticisms of OpenAI
Despite all these rapid advancements, OpenAI has not been immune to criticism, both in the world of tech and beyond. The company's shift from "nonprofit" to "capped profit" status in 2019 fueled criticism that its commitment to working with others on building "safe and beneficial" general artificial intelligence had become a profit-driven "AI arms race" to produce the most advanced AI technology on the market. Simultaneously, others have expressed concerns about OpenAI's growing lack of transparency into how its groundbreaking products are being developed, given its commitment to developing open source software.
More recently, the debut of ChatGPT in late 2022 has drawn a fair deal of criticism alongside widespread praise for its groundbreaking abilities. The technology has been accused of producing "hallucinations": factually inaccurate answers that appear intelligent and well written, yet don't hold up under scrutiny. While this is perhaps the platform's most infamous drawback, others include its potential to plagiarize from other sources, as well as its limitations in producing answers about the most up-to-date news. Because its training data extended only to 2021, the content it generates could mislead users who need information on current events. OpenAI updated ChatGPT Plus in November 2023 to include information up to April of that year.
OpenAI's chatbots are among many that faced safety concerns early in 2023. Despite the assistive capabilities of these tools, researchers also detected toxic content in their responses, including information on how to construct a bomb and guidance on committing identity theft and stealing from a charity.
International skepticism surrounding AI also continues to emerge. The French and Italian governments, for example, have issued demands and assessments to OpenAI. Meanwhile, the U.S. White House requested further information about the risks associated with AI.
In June 2023, OpenAI faced scrutiny from a lawsuit filed by the Joseph Saveri Law Firm on behalf of five book authors. The complaint alleged that ChatGPT and its underlying large language models (LLMs), GPT-3.5 and GPT-4, contained copyrighted materials. Specifically, it accused OpenAI of using the authors' copyrighted works, without their permission, to train the LLMs to produce summaries.
The New York Times also sued OpenAI and Microsoft in December 2023 for copyright infringement, accusing them of illegally copying its articles to train LLMs and create AI products that compete with The New York Times. The paper was the first major news organization to sue OpenAI and Microsoft for using its publications to train AI systems.
OpenAI has also taken steps to address these concerns. In response to the skepticism surrounding ChatGPT, it introduced ChatGPT Enterprise in August 2023, which gives organizations more control over model training and the data that exists within models. However, a lack of clarity remains around the model's training data, and enterprises have voiced concerns that the model may have been trained on copyrighted material.
OpenAI has also faced criticism over the lack of diversity on its board of directors. Critics noted that the board's lack of representation isn't in line with the company's mission to "benefit all of humanity." Following the firing and rehiring of Sam Altman in November 2023, OpenAI ousted its only two female board members and reinstated a board made up exclusively of white men. Lawmakers in Washington also recommended that OpenAI diversify its board following the restructuring.
The Future of OpenAI
OpenAI has not provided extensive public commentary on future plans, but based on recent investments, democratization of AI is a clear goal of the Microsoft-OpenAI partnership, as nontechnical professionals should soon have more AI tools at their disposal that do not require AI expertise.
In March 2023, OpenAI released GPT-4, the company's first major upgrade in language model technology since GPT-3.5, the foundation for ChatGPT. GPT-4 has been labeled superior to its predecessors because it delivers multimodal AI functionality: it can analyze not just text but also images. Given OpenAI's most recent releases, the company shows no sign of slowing down. OpenAI projects it will surpass $1 billion in revenue by 2024.
Microsoft has also taken actions that signal the expected growth of OpenAI and similar resources. Earlier in 2023, the company announced an investment of more than $13 billion in OpenAI. The investment, aimed at sustaining the use of AI for various purposes, gained broad support after being compared to the internet revolution.
In the 1990s, Bill Gates released a memo that described the internet as a "tidal wave" that would have a large impact on Microsoft. While referencing this memo, Microsoft CEO Satya Nadella recently noted the similarities between internet and AI growth. Furthermore, Microsoft is aiming to use these tools to support innovation.
In parallel with its anticipated growth, OpenAI hosted its first-ever developer conference in November 2023. At the event, OpenAI unveiled GPT-4 Turbo, a language model with a significantly larger context window than its predecessors, a cheaper API pricing model and a more recent training data cutoff. OpenAI also debuted customizable GPTs, a "Copyright Shield" that will protect customers from legal action, and a GPT Store where users can access and monetize custom GPTs.
In December 2023, OpenAI struck a deal with media company Axel Springer to use its news content in OpenAI’s products. This lets ChatGPT give news summaries from Axel Springer’s outlets, which include Politico and Business Insider. The deal shows OpenAI’s intent to explore opportunities in AI-powered journalism.
Generative AI is a broad term that can be used for any AI system whose primary function is to generate content. This is in contrast to AI systems that perform other functions, such as classifying data (e.g., assigning labels to images), grouping data (e.g., identifying customer segments with similar purchasing behavior), or choosing actions (e.g., steering an autonomous vehicle).
Typical examples of generative AI systems include image generators (such as Midjourney or Stable Diffusion), large language models (such as GPT-4, PaLM, or Claude), code generation tools (such as Copilot), or audio generation tools (such as VALL-E or resemble.ai). (Disclosure: I serve in an uncompensated capacity on the non-profit board of directors for OpenAI, the company behind GPT-4.)
Using the term “generative AI” emphasizes the content-creating function of these systems. It is a relatively intuitive term that covers a range of types of AI that have progressed rapidly in recent years.
Large language models (LLMs) are a type of AI system that works with language. In the same way that an aeronautical engineer might use software to model an airplane wing, a researcher creating an LLM aims to model language, i.e., to create a simplified—but useful—digital representation. The “large” part of the term describes the trend towards training language models with more parameters. A key finding of the past several years of language model research has been that using more data and computational power to train models with more parameters consistently results in better performance. Accordingly, cutting-edge language models trained today might have thousands or even millions of times as many parameters as language models trained ten years ago, hence the description “large.”
Typical examples of LLMs include OpenAI’s GPT-4, Google’s PaLM, and Meta’s LLaMA. There is some ambiguity about whether to refer to specific products (such as OpenAI’s ChatGPT or Google’s Bard) as LLMs themselves, or to say that they are powered by underlying LLMs.
As a term, LLM is the most specific of the three discussed here and is often used by AI practitioners to refer to systems that work with language. Nonetheless, it is still a somewhat vague descriptor. It is not entirely clear what should and shouldn’t count as a language model—does a model trained on programming code count? What about one that primarily works with language, but can also take images as inputs? There is also no established consensus on what size of model should count as “large.”
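To make "modeling language" concrete, here is a minimal sketch of the idea in Python. This is a toy bigram model, not how production LLMs are built: its "parameters" are simply word-pair counts, from which next-word probabilities can be read off. A modern LLM does the same basic job of predicting likely continuations, but with billions of learned parameters and far longer context.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count word-pair frequencies -- a toy 'language model' whose
    'parameters' are just the bigram counts."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word_probs(model, word):
    """Probability distribution over the next word, given one word of context."""
    following = model[word]
    total = sum(following.values())
    return {w: c / total for w, c in following.items()}

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(next_word_probs(model, "the"))  # 'cat' is more likely than 'mat' after 'the'
```

Scaling this idea up (more context, more data, learned rather than counted parameters) is, very loosely, what the "large" in LLM refers to.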
Foundation model is a term popularized by an institute at Stanford University. It refers to AI systems with broad capabilities that can be adapted to a range of different, more specific purposes. In other words, the original model provides a base (hence “foundation”) on which other things can be built. This is in contrast to many other AI systems, which are specifically trained and then used for a particular purpose.
Typical examples of foundation models include many of the same systems listed as LLMs above. To illustrate what it means to build something more specific on top of a broader base, consider ChatGPT. For the original ChatGPT, an LLM called GPT-3.5 served as the foundation model. Simplifying somewhat, OpenAI used some chat-specific data to create a tweaked version of GPT-3.5 that was specialized to perform well in a chatbot setting, then built that into ChatGPT.
At present, “foundation model” is often used roughly synonymously with “large language model” because language models are currently the clearest example of systems with broad capabilities that can be adapted for specific purposes. The relevant distinction between the terms is that “large language models” specifically refers to language-focused systems, while “foundation model” is attempting to stake out a broader function-based concept, which could stretch to accommodate new types of systems in the future.
What exactly are the differences between generative AI, large language models, and foundation models? This post aims to clarify what each of these three terms means, how they overlap, and how they differ.
Let’s start with generative AI. ChatGPT’s ability to spit out uncannily human-sounding new content probably comes to mind.
Generative AI can be defined as artificial intelligence focused on creating models that produce original content, such as images, music, or text. By ingesting vast amounts of training data, generative AI models employ complex machine learning algorithms to learn patterns and generate output. Their techniques include recurrent neural networks (RNNs) and generative adversarial networks (GANs). In addition, the transformer architecture (the T in ChatGPT) is a key element of this technology.
An image-generation model, for instance, might be trained on a dataset of millions of photos and drawings to learn the patterns and characteristics that make up diverse types of visual content. And in the same way, music- and text-generation models are trained on massive collections of music or text data, respectively.
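As a toy illustration of that learn-patterns-then-generate loop, the following sketch (purely illustrative; real generative models use neural networks rather than lookup tables) learns which character tends to follow each two-character context in a training string, then samples new text from those patterns:

```python
import random
from collections import defaultdict

def learn_patterns(text, order=2):
    """Map each 2-character context to the characters seen after it."""
    table = defaultdict(list)
    for i in range(len(text) - order):
        table[text[i:i + order]].append(text[i + order])
    return table

def generate(table, seed_text, length, rng):
    """Sample new text one character at a time from the learned patterns."""
    out = seed_text
    for _ in range(length):
        context = out[-2:]          # matches order=2 above
        choices = table.get(context)
        if not choices:
            break
        out += rng.choice(choices)
    return out

table = learn_patterns("banana bandana banana band")
print(generate(table, "ba", 10, random.Random(0)))
```

Even at this tiny scale, the output recombines fragments of the training data into new sequences, which is the essence of generative modeling.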
Key examples of generative AI models include:
LLMs are a specialized class of AI model that uses natural language processing (NLP) to understand and generate humanlike text-based content. Unlike generative AI models broadly, which have applications across various creative fields, LLMs are specifically designed for handling language-related tasks. Many of them also serve as adaptable foundation models.
These large models achieve contextual understanding through the attention mechanisms built into their architectures (earlier recurrent designs relied on built-in memory cells instead): each part of the input can draw on relevant information elsewhere in the context window, allowing the model to produce coherent and contextually accurate responses.
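For intuition, here is a stripped-down sketch of the attention idea: a single query vector scoring three key/value pairs. Real transformers apply this across many heads and layers, but the core computation is the same.

```python
import math

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Toy scaled dot-product attention for one query vector.
    Returns the attention weights and a weighted mix of the value
    vectors, where the weights reflect key/query similarity."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    mixed = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return weights, mixed

# One query attending over three (key, value) pairs.
weights, mixed = attention(
    query=[1.0, 0.0],
    keys=[[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]],
    values=[[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
)
print(weights)  # the key most similar to the query gets the largest weight
```

This is how a model "retrieves" relevant context: nothing is stored in a separate memory bank; relevance is recomputed from the input itself.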
Examples of LLMs include:
Now that you have an idea of how generative AI and large language model technology works in some real-world areas, here’s something else to think about: when they’re utilized together, they can enhance various applications and unlock some exciting possibilities. These include:
LLMs and generative AI models can produce original, contextually relevant creative content across domains including images, music, and text. For example, a generative AI model trained on a dataset of paintings can be enhanced by an LLM that “understands” art history and can generate descriptions and analyses of artwork.
This content-generation combo is a boon for ecommerce, among other industries. No matter what your online store sells, the technology can generate compelling marketing images and phrasing that helps your brand better engage shoppers. Whether you post AI-aided content on social media or on your site, it can help you more quickly win over customers and increase your sales.
By drawing on both generative AI and LLMs, you can personalize content for individual shoppers. LLMs can make sense of shopper preferences and generate personalized recommendations in response, while generative AI can create customized content based on those preferences, including targeted product recommendations and ads for items likely to interest each shopper.
LLMs can enhance the conversational abilities of bots and assistants by incorporating generative AI techniques. LLMs provide context and memory capabilities, while generative AI enables the production of engaging responses. This results in more natural, humanlike, interactive conversations. Again, this technology refinement can ultimately help improve shopper satisfaction.
Large language models can be combined with generative AI models that work with other modalities, such as images or audio. This allows for generation of multimodal content, with the AI system being able to create text descriptions of images or create soundtracks for videos, for instance. By combining language-understanding strengths with content generation, AI systems can create richer, more immersive content that grabs the attention of shoppers and other online prospects.
When combined with generative AI, LLMs can be harnessed to create stories and narratives. Human writers can provide prompts and initial story elements, and the AI system can then generate subsequent content, all while maintaining coherence and staying in context. This collaboration opens up online retail possibilities that can streamline the products and services lifecycle and boost ROI.
LLMs can be utilized alongside generative AI models to improve content translation and localization. A large language model can decipher the nuances of language, while generative AI can create accurate translations and localized versions of the content. This combination enables more-accurate, contextually appropriate translations in real time, enhancing global communication and content accessibility.
Both large language models and generative AI models can generate concise summaries of long-form content. Their strengths: LLMs can assess the context and key points, while generative AI can develop condensed versions of the text that capture the essence of the original material. This ensures efficient information retrieval and lets people quickly grasp the main ideas laid out in lengthy documents.
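To make the condensing step concrete, here is a classical frequency-based extractive summarizer in plain Python. It is not an LLM; it is only a toy sketch of what "assess the key points, then keep a condensed version" can look like:

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Toy extractive summarizer: score each sentence by the average
    frequency of the words it contains, then keep the top-scoring
    sentences in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return ' '.join(s for s in sentences if s in ranked)

doc = ("Blockchain records transactions in a shared ledger. "
       "The ledger is immutable. "
       "Bananas are yellow.")
print(summarize(doc))  # keeps the sentence densest in frequent words
```

An LLM-based summarizer replaces the crude word-frequency score with learned semantic understanding, and can rewrite rather than merely extract, but the goal is the same.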
No, there won’t be a quiz. But we hope this blog post has helped you grasp the basics of what’s going on behind the scenes of these two budding technologies.
Cloud Computing
No need to build your own IT infrastructure for peak needs
XaaS (what can be rented?)
Cloud Usages
Companies with large datacenters, often already running large-scale software
Amazon Web Services (AWS)
Microsoft Azure
Google Cloud Platform (GCP)
The latest of the three to come into play and the smallest by market share, but with good growth
Multi-Cloud
Private/On-Premise Cloud
IBM Cloud / OpenShift cloud
OpenShift Container Platform (formerly known as OpenShift Enterprise) is Red Hat's on-premises private platform as a service product, built around application containers powered by CRI-O, with orchestration and management provided by Kubernetes, on Red Hat Enterprise Linux and Red Hat Enterprise Linux CoreOS.
OpenShift Container Platform
OpenShift Dedicated
Red Hat OpenShift Service on AWS
Azure Red Hat OpenShift
Red Hat Advanced Cluster Security for Kubernetes
OpenShift Online
OpenShift Kubernetes Engine
Virtualization
• Traditional: applications run on physical servers
  – Manual mapping of apps to servers
• Apps can be distributed
• Storage may be on a SAN or NAS
  – IT admins deal with “change”
• Modern: virtualized data centers
  – Apps run inside virtual servers; VMs mapped onto physical servers
  – Provides flexibility in mapping from virtual to physical resources

Virtualization Benefits
• Resource management is simplified
  – Applications can be started from preconfigured VM images / appliances
  – Virtualization layer / hypervisor permits resource allocations to be varied dynamically
  – VMs can be migrated without application downtime

Virtual Datacenter
• A cluster of machines, each running a set of VMs
  – Drive up utilization by packing many VMs onto each cluster node
  – Fault recovery is simplified
    • If hardware fails, copy the VM image elsewhere
    • If software fails, restart the VM from a snapshot
  – Can safely allow third parties to inject VM images into your data center
    • Hosted VMs in the cloud, commercial computing grids

Recent Trend: Containers
• Lightweight virtualization
  – Running multiple isolated user-space applications on one OS
  – Virtualization layer runs as an application within the OS
  – Focus on performance isolation
• Examples: Docker, LXC, Kubernetes, Xen unikernels
• More in the rest of this quarter

Software-Defined Data Center
• All infrastructure is virtualized and delivered as a service & the control of this datacenter is entirely automated by software
Traditional Data Center.
Software-Defined Network (SDN)
• A network in which the control plane is physically separate from the data plane and
• A single (logically centralized) control plane controls several forwarding devices.
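The control-plane/data-plane split can be sketched in a few lines of Python (a toy model with made-up switch names: the controller computes shortest-path forwarding tables centrally, and each switch only performs table lookups):

```python
import heapq

def compute_forwarding(links, dst):
    """Control plane: for every switch, compute the next hop toward dst
    by running Dijkstra from dst over an undirected weighted graph."""
    graph = {}
    for a, b, w in links:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    dist = {dst: 0}
    parent = {}                      # parent[n] = n's next hop toward dst
    heap = [(0, dst)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                 # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                parent[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return parent

def forward(tables, switch, dst):
    """Data plane: a switch just looks up its installed table entry."""
    return tables[dst].get(switch)

links = [("s1", "s2", 1), ("s2", "s3", 1), ("s1", "s3", 5)]
tables = {"s3": compute_forwarding(links, "s3")}
print(forward(tables, "s1", "s3"))  # prints s2: the 2-hop path (cost 2) beats the direct link (cost 5)
```

The point of the split is that all routing logic lives in `compute_forwarding`; the switches stay simple, and changing network behavior means changing one program rather than many devices.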
Inside the “Network”
• Closed equipment
  – Software bundled with hardware
  – Vendor-specific interfaces
• Over-specified
  – Slow protocol standardization
• Few people can innovate
  – Equipment vendors write the code
  – Long delays to introduce new features
Impacts performance, security, reliability, cost…

Networks are Hard to Manage
• Operating a network is expensive
  – More than half the cost of a network
  – Yet, operator error causes most outages
• Buggy software in the equipment
  – Routers with 20+ million lines of code
  – Cascading failures, vulnerabilities, etc.
• The network is “in the way”
  – Especially a problem in data centers
  – … and home networks
What is on the blockchain?
Blockchain defined: Blockchain is a shared, immutable ledger that facilitates the process of recording transactions and tracking assets in a business network. An asset can be tangible (a house, car, cash, land) or intangible (intellectual property, patents, copyrights, branding).
Blockchain Overview:
Definition: Blockchain is a decentralized, distributed digital ledger that records transactions across a network of computers. It ensures transparency and security through consensus mechanisms.
Functionality: Utilizing advanced database mechanisms, blockchain allows transparent information sharing within business networks, minimizing the need for intermediaries.
Immutable Ledger: The technology maintains an immutable record of transactions, enhancing trust and preventing tampering.
Applications: Beyond cryptocurrencies, blockchain finds applications in various sectors, such as finance, supply chain, and healthcare.
In summary, blockchain revolutionizes data handling, offering secure, transparent, and efficient solutions across diverse industries.
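The hash chaining that makes such a ledger tamper-evident can be sketched in a few lines of Python (a minimal illustration only; real blockchains add consensus protocols, digital signatures, and peer-to-peer replication on top of this):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Link a new block to the tip of the chain via the previous hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "transactions": transactions}
    block["hash"] = block_hash({"prev": prev, "transactions": transactions})
    chain.append(block)

def is_valid(chain):
    """Recompute every hash; any tampering breaks the links."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash({"prev": block["prev"],
                                        "transactions": block["transactions"]}):
            return False
    return True

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(chain))                         # True
chain[0]["transactions"][0]["amount"] = 500    # tamper with history
print(is_valid(chain))                         # False
```

Because each block's hash covers the previous block's hash, rewriting any past transaction invalidates every block after it, which is what "immutable ledger" means in practice.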
Why is blockchain so popular?
Blockchain enables secure and transparent data sharing among multiple parties. Instead of relying on centralized servers, blockchain-based platforms allow participants to directly exchange data while maintaining control over their own data privacy and security.
Blockchain and Cryptocurrency Overview:
Bitcoin, Cryptocurrency, and Blockchain Distinctions:
Cryptocurrency Characteristics:
Blockchain Beyond Cryptocurrency:
Key Features of Blockchain:
Always refer to authoritative sources for in-depth exploration and understanding.
Introducing Crypto BAR: transactional, precious-metal-based currencies built on blockchain technology.
More to come ...
The ASA Group of companies is registered under the flagship company ASA Group LLC.
Come and see the positive side with us.
Our companies operate across various verticals, making the group a global conglomerate with a substantial presence in different sectors. Revenue contributions support the upliftment of socially deprived communities.
Researching the origin of the universe: AUM.
'A' referring to Brahma, 'U' to Vishnu, and 'M' to Mahadev