Large Language Models versus Wall Street: Can AI enhance your financial investment decisions?

How do you determine which stocks to buy, sell, or hold? This is a complex question that requires considering multiple factors: geopolitical events, market trends, company-specific news, and macroeconomic conditions. For individuals or small to medium businesses, taking all these factors into account can be overwhelming. Even large corporations with dedicated financial analysts face challenges due to organizational silos or lack of communication.

Inspired by the success of GPT-4’s reasoning abilities, researchers from Alpha Tensor Technologies Ltd., the University of Piraeus, and Innov-Acts have developed MarketSenseAI, a GPT-4-based framework designed to assist with stock-related decisions—whether to buy, sell, or hold. MarketSenseAI not only provides predictive capabilities and a signal evaluation mechanism but also explains the rationale behind its recommendations.

The platform is highly customizable to suit an individual’s or company’s risk tolerance, investment plans, and other preferences. It consists of five core modules:

  1. Progressive News Summary – Summarizes recent developments in the company or sector, alongside past news reports.
  2. Fundamentals Summary – Analyzes the company’s latest financial statements, providing quantifiable metrics.
  3. Macroeconomic Summary – Examines the macroeconomic factors influencing the current market environment.
  4. Stock Price Dynamics – Analyzes the stock’s price movements and trends.
  5. Signal Generation – Integrates the information from all the modules to deliver a comprehensive investment recommendation for a specific stock, along with a detailed rationale.
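
As a rough illustration of the Signal Generation step, the sketch below combines the outputs of the other four modules into a single GPT-4 prompt and asks for a recommendation with its rationale. The function name, prompt wording, and use of the OpenAI Chat Completions client are assumptions for illustration only, not MarketSenseAI's actual implementation.

```python
# Hypothetical sketch of a signal-generation step: the four module summaries
# are assumed to be available as plain strings. Not MarketSenseAI's real code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_signal(ticker: str, news: str, fundamentals: str,
                    macro: str, price_dynamics: str) -> str:
    """Ask GPT-4 for a buy/sell/hold recommendation with a rationale."""
    prompt = (
        f"You are an investment analyst evaluating {ticker}.\n\n"
        f"Progressive news summary:\n{news}\n\n"
        f"Fundamentals summary:\n{fundamentals}\n\n"
        f"Macroeconomic summary:\n{macro}\n\n"
        f"Stock price dynamics:\n{price_dynamics}\n\n"
        "Recommend BUY, SELL, or HOLD and explain your reasoning."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```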

This framework serves as a valuable assistant in the decision-making process, empowering investors to make more informed choices. Integrating AI into investment decisions offers several key advantages: it introduces less bias compared to human analysts, efficiently processes large volumes of unstructured data, and identifies patterns, outliers, and discrepancies that traditional analysis might overlook.

Novelis attends the Artificial Intelligence Expo of the Ministry of the Interior

On October 8th, 2024, Novelis will participate in the Artificial Intelligence Expo organized by the Digital Transformation Directorate of the Ministry of the Interior.

This event, held at the Bercy Lumière Building in Paris, will immerse you in the world of AI through demonstrations, interactive booths, and immersive workshops. It’s the perfect opportunity to explore the latest technological advancements that are transforming our organizations!

Join Novelis: Turning Generative AI into a Strength for Information Sharing

We invite you to discover how Novelis is revolutionizing the way businesses leverage their expertise and share knowledge through Generative AI. At our booth, we will highlight the challenges and solutions for the reliable and efficient transmission of information within organizations.

Our experts – El Hassane Ettifouri, Director of Innovation; Sanoussi Alassan, Ph.D. in AI and Generative AI Specialist; and Laura Minkova, Data Scientist – will be present to share their insights on how AI can transform your organization.

Don’t miss this opportunity to connect with us and enhance your company’s efficiency!

[Webinar] Take the Guesswork Out of Your Intelligent Automation Initiatives with Process Intelligence 

Are you struggling to determine how to kick-start or optimize your intelligent automation efforts? You’re not alone. Many organizations face challenges in deploying automation and AI technologies effectively, often wasting time and resources. The good news is there’s a way to take the guesswork out of the process: Process Intelligence.

Join us on September 26 for an exclusive webinar with our partner ABBYY, Take the Guesswork Out of Your Intelligent Automation Initiatives Using Process Intelligence. In this session, Catherine Stewart, President of the Americas at Novelis, will share her expertise on how businesses can use process mining and task mining to optimize workflows and deliver real, measurable impact.  

Why You Should Attend 

Automation has the potential to transform your business operations, but without the right approach, efforts can easily fall flat. Catherine Stewart will draw from her extensive experience leading automation initiatives to reveal how process intelligence can help businesses achieve efficiency gains, reduce bottlenecks, and ensure long-term success. 

Key highlights: 

  • How process intelligence can provide critical insights into how your processes are performing and where inefficiencies lie. 
  • The role of task mining in capturing task-level data to complement process mining, providing a complete view of your operations. 
  • Real-world examples of how Novelis has helped clients optimize their automation efforts using process intelligence, leading to improved efficiency, accuracy, and customer satisfaction. 
  • The importance of digital twins for simulating business processes, allowing for continuous improvements without affecting production systems. 

Graphical user interface agents optimization for visual instruction grounding using multi-modal Artificial Intelligence systems

Discover the first version of our scientific publication “Graphical user interface agents optimization for visual instruction grounding using multi-modal artificial intelligence systems”, published on arXiv and submitted to the Engineering Applications of Artificial Intelligence journal. The article is already available to the public.

Thanks to the Novelis research team for their know-how and expertise.

Abstract

Most instance perception and image understanding solutions focus mainly on natural images. However, applications for synthetic images, and more specifically, images of Graphical User Interfaces (GUI) remain limited. This hinders the development of autonomous computer-vision-powered Artificial Intelligence (AI) agents. In this work, we present Search Instruction Coordinates or SIC, a multi-modal solution for object identification in a GUI. More precisely, given a natural language instruction and a screenshot of a GUI, SIC locates the coordinates of the component on the screen where the instruction would be executed. To this end, we develop two methods. The first method is a three-part architecture that relies on a combination of a Large Language Model (LLM) and an object detection model. The second approach uses a multi-modal foundation model.
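
As a purely illustrative sketch of the first, detector-plus-LLM approach (the paper's actual models, prompts, and detector are not reproduced here), the snippet below stubs out a GUI component detector and lets an LLM pick which detected component matches the instruction, returning the center of its bounding box.

```python
# Illustrative sketch only, not the SIC implementation: a stubbed GUI object
# detector proposes components, an LLM selects the one matching the
# instruction, and the center of its bounding box is returned.
from openai import OpenAI

client = OpenAI()


def detect_components(screenshot_path: str) -> list[dict]:
    """Stand-in for a GUI object detection model returning labels and boxes."""
    return [
        {"label": "Submit button", "box": (520, 610, 640, 650)},
        {"label": "Email text field", "box": (200, 300, 600, 340)},
    ]


def locate_instruction(instruction: str, screenshot_path: str) -> tuple[int, int]:
    components = detect_components(screenshot_path)
    listing = "\n".join(f"{i}: {c['label']} at {c['box']}"
                        for i, c in enumerate(components))
    answer = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content":
                   f"Instruction: {instruction}\nDetected components:\n{listing}\n"
                   "Reply with only the index of the component to act on."}],
    )
    idx = int(answer.choices[0].message.content.strip())
    x1, y1, x2, y2 = components[idx]["box"]
    return (x1 + x2) // 2, (y1 + y2) // 2
```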

Novelis sponsors Chief AI Officer USA Exchange in Florida

The Chief AI Officer USA Exchange event, scheduled for May 1st and 2nd, 2024, is an exclusive, invitation-only gathering held at the Le Méridien Dania Beach hotel in Fort Lauderdale, Florida. Tailored for executives from C-Suite to VP levels, it aims to simplify the complexities of Artificial Intelligence.

The world of AI is evolving at an unprecedented pace, offering unparalleled opportunities while presenting significant challenges. In this complex landscape, the role of this event becomes crucial for guiding businesses through the intricacies of AI, maximizing its benefits while cautiously navigating to avoid ethical pitfalls and privacy concerns.

On the agenda: 

  • Role of the Chief AI Officer
  • AI’s relationship to Privacy and Data Governance
  • Realistic application of GenAI
  • Government Communication and Regulation
  • Creating In-house vs Out-of-house AI/ML solutions
  • Strategic implementation and enterprise transformation
  • Cybersecurity in AI
  • Sustainability in AI

Unique aspects of the exchange:  

  • Exclusive Network: Select gathering of C-level to VP executives in AI and emerging tech. Invitation-only for diverse, industry-relevant discussions.
  • Tailored Content: Leveraging five+ years of data for custom content from a varied panel of experts.
  • Selected Vendors: Sponsors chosen to address contemporary challenges, enhancing participant experience.

Novelis stands out as an expert in Automation and GenAI, possessing expertise in the synergistic integration of these two fields. By merging our deep knowledge of automation with the latest advancements in GenAI, we provide our partners and clients with unparalleled expertise, enabling them to navigate confidently through the complex AI ecosystem.

Novelis will be represented by Catherine Stewart, President and General Manager for the Americas; Walid Dahhane, CIO & Co-Founder; and Paul Branson, Director of Solution Engineering.

The event represents a peerless platform for defining emerging roles in AI, discussing relevant case studies, and uncovering proven strategies for successful AI integration in businesses. Join us to discuss AI and Automation together!

AI in Time Series Forecasting

Discover how AI can be applied to make efficient use of time series data for forecasting.

CHRONOS – Foundation Model for Time Series Forecasting

Time series forecasting is crucial for decision-making in various areas, such as retail, energy, finance, healthcare, and climate science. Let’s talk about how AI can be leveraged to effectively harness such crucial data.

The emergence of deep learning techniques has challenged traditional statistical models that dominated time series forecasting. These techniques have mainly been made possible by the availability of extensive time series data. However, despite the impressive performance of deep learning models, there is still a need for a general-purpose “foundation” forecasting model in the field.

Recent efforts have explored using large language models (LLMs) with zero-shot learning capabilities for time series forecasting. These approaches prompt pretrained LLMs directly or fine-tune them for time series tasks. However, they all require task-specific adjustments or computationally expensive models.

With Chronos, presented in the new paper “Chronos: Learning the Language of Time Series”, the team at Amazon takes a novel approach by treating time series as a language and tokenizing them into discrete bins. This allows off-the-shelf language models to be trained on the “language of time series” without altering the traditional language model architecture.
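
A minimal sketch of that core idea is shown below, assuming mean scaling and a uniform bin grid (the bin count and value range are illustrative choices, not the paper's exact configuration): each real-valued observation becomes a discrete token id that a language model can consume.

```python
# Minimal sketch of the Chronos tokenization idea (simplified): scale the
# series, then quantize each value into one of a fixed number of bins so it
# can be fed to a language model as a sequence of token ids.
import numpy as np


def tokenize_series(series: np.ndarray, n_bins: int = 4096,
                    low: float = -15.0, high: float = 15.0) -> np.ndarray:
    """Mean-scale the series and map each value to a discrete bin id."""
    scale = np.mean(np.abs(series))
    scale = scale if scale > 0 else 1.0          # avoid division by zero
    scaled = series / scale
    edges = np.linspace(low, high, n_bins - 1)   # uniform bin boundaries
    return np.digitize(scaled, edges)            # token ids in [0, n_bins - 1]


tokens = tokenize_series(np.array([10.0, 12.5, 11.0, 13.2, 12.8]))
```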

Pretrained Chronos models, ranging from 20M to 710M parameters, are based on the T5 family and trained on a diverse collection of datasets. Additionally, data augmentation strategies address the scarcity of publicly available high-quality time series datasets. In the paper’s benchmarks, Chronos delivers state-of-the-art in-domain and zero-shot forecasting performance, outperforming both traditional models and task-specific deep learning approaches.

Why is this essential? As a language model operating over a fixed vocabulary, Chronos integrates with future advancements in LLMs, positioning it as an ideal candidate for further development as a generalist time series model.

Multivariate Time Series – A Transformer-Based Framework for Multivariate Time Series Representation Learning

Multivariate time series (MTS) data is common in various fields, including science, medicine, finance, engineering, and industrial applications. It tracks multiple variables simultaneously over time. Despite the abundance of MTS data, labeled data for training models remains scarce. Today’s post presents a transformer-based framework for unsupervised representation learning of multivariate time series by providing an overview of a research paper titled “A Transformer-Based Framework for Multivariate Time Series Representation Learning,” authored by a team from IBM and Brown University. Pre-trained models generated from this framework can be applied to various downstream tasks, such as regression, classification, forecasting, and missing value imputation.

The main idea of the proposed approach is to use a transformer encoder, adapted from the traditional transformer to process sequences of feature vectors that represent multivariate time series instead of sequences of discrete word indices. Positional encodings are incorporated so the model captures the sequential nature of time series data. During unsupervised pre-training, the model is trained to predict masked values as part of a denoising task in which portions of the input are hidden.

Specifically, a proportion of each variable’s sequence is masked independently of the other variables. Using a linear layer on top of the final vector representations, the model then tries to reconstruct the full, uncorrupted input vectors. This unsupervised pre-training can reuse the same data samples without their labels and, in some cases, yields performance improvements even compared to fully supervised methods. As with any transformer architecture, the pre-trained model can be used for regression and classification tasks by adding output layers.
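
The snippet below is a simplified PyTorch sketch of this pretraining scheme; it uses elementwise masking and a learnable positional embedding for brevity, the hyperparameters are arbitrary, and it is not the authors' code.

```python
# Simplified PyTorch sketch (not the authors' code) of masked-value
# pretraining for multivariate time series with a transformer encoder.
import torch
import torch.nn as nn


class TSTransformer(nn.Module):
    def __init__(self, n_vars: int, d_model: int = 64, n_heads: int = 4,
                 n_layers: int = 2, max_len: int = 512):
        super().__init__()
        self.input_proj = nn.Linear(n_vars, d_model)   # feature vectors -> model dim
        self.pos_embed = nn.Parameter(torch.randn(max_len, d_model))  # learnable positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.output_proj = nn.Linear(d_model, n_vars)  # reconstruct the uncorrupted input

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, time, n_vars)
        h = self.input_proj(x) + self.pos_embed[: x.size(1)]
        return self.output_proj(self.encoder(h))


# One pretraining step: hide a proportion of values (elementwise here, for
# brevity) and compute the reconstruction loss only on the masked positions.
x = torch.randn(8, 128, 6)            # toy batch: 8 series, 128 steps, 6 variables
mask = torch.rand_like(x) < 0.15      # ~15% of entries masked
model = TSTransformer(n_vars=6)
pred = model(x.masked_fill(mask, 0.0))
loss = ((pred - x)[mask] ** 2).mean()
loss.backward()
```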

The paper introduces an interesting approach to using transformer-based models for effective representation learning in multivariate time series data. When evaluated on various benchmark datasets, it shows improvements over existing methods and outperforms them in multivariate time series regression and classification. The framework demonstrates superior performance even with limited training samples while maintaining computational efficiency.

AI in industrial infrastructures

Discover the recent advances in the application of AI to industrial infrastructures.

Overview of Predictive maintenance of pumps in civil infrastructure using AI

Predictive maintenance (PdM) is a proactive maintenance strategy that leverages data-driven analysis, analytics, artificial intelligence (AI) methods, and advanced technologies to predict when equipment or machinery is likely to fail. An example of predictive maintenance using AI techniques is in civil infrastructure, particularly in the upkeep of pumps.

Three main maintenance strategies are applied to pumps in civil infrastructure: corrective maintenance, preventive maintenance, and predictive maintenance (PdM). Corrective maintenance involves diagnosing, isolating, and rectifying pump faults after they occur, aiming to restore the failed pump to a functional state. Preventive maintenance adheres to a predefined schedule, replacing deteriorated pump parts at regular intervals, irrespective of whether they require replacement. In contrast, to overcome the drawbacks of corrective and preventive maintenance approaches, PdM utilizes data-driven analysis. The process involves continuous monitoring of real-time data from machinery. By employing sensors to gather information like vibration, temperature, and other relevant metrics, the system establishes a baseline for normal operational conditions. Machine learning algorithms then analyze this data, identifying patterns and anomalies indicative of potential issues or deterioration.
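
A toy sketch of that last step is shown below: an anomaly detector is fitted on sensor readings collected under normal operation and then flags deviating readings on new data. The sensor features, values, and model choice are illustrative assumptions, not taken from a specific PdM deployment.

```python
# Illustrative sketch, not a production PdM system: fit an anomaly detector
# on readings from normal pump operation, then flag deviations on new data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline readings under normal conditions: [vibration (mm/s), temperature (C)]
normal = np.column_stack([rng.normal(2.0, 0.3, 500), rng.normal(55.0, 2.0, 500)])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New readings streamed from the pump's sensors
new_readings = np.array([[2.1, 56.0], [4.8, 71.5]])
flags = detector.predict(new_readings)   # 1 = normal, -1 = anomaly worth investigating
```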

A cutting-edge advancement in technology is the ADT-enabled Predictive Maintenance (PdM) framework designed specifically for Wastewater Treatment Plant (WWTP) pumps.

Why is this essential? This technology is important because predictive maintenance of pumps in civil infrastructure, powered by AI, prevents unexpected failures. It enhances system reliability, reduces downtime, and optimizes resource allocation. Detecting issues early through data-driven analysis ensures efficient and resilient operations, which is crucial for the functionality of vital infrastructure components.

LCSA – Machine Learning Based Model

Artificial Intelligence for Smarter, More Sustainable Building Design

In the last couple of years, the field of artificial intelligence (AI) has influenced a wide range of fields, from healthcare (check out our posts from last month! 😉) to finance, and even construction!

This month our theme is AI for industrial infrastructures. A large component of industrial infrastructure is the construction of physical infrastructure like roads, bridges, sewage systems, and buildings. This post tackles AI applications in the construction of buildings. Specifically, we take a deeper look at how AI and machine learning (ML) can help design more sustainable homes and buildings in the future, as well as reassess the environmental impacts of existing buildings.

One technique for combating the negative environmental impacts of the construction industry is to assess a project’s impact beforehand, using the Life Cycle Sustainability Assessment (LCSA) approach. LCSA takes into account a building’s environmental (Life Cycle Assessment, LCA), economic (Life Cycle Costing, LCC), and social (Social Life Cycle Assessment, SLCA) performance throughout its entire life cycle, giving a better indication of the overall sustainability of a project.

Using an ML model (the best choice may differ from project to project), the building’s energy performance can be predicted, which in turn helps determine the (possibly very complicated) functions for the LCA, LCC, and SLCA indexes. The typically tedious and lengthy task of computing the LCSA thus becomes significantly more straightforward.
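
As a hedged illustration of this step, the sketch below trains a generic regressor on a handful of made-up building design features to predict annual energy use, which could then feed the LCA, LCC, and SLCA index calculations. The features, values, and model choice are assumptions for the example, not a standard LCSA dataset or method.

```python
# Toy sketch: predict building energy performance from design features.
# The dataset below is invented purely for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

buildings = pd.DataFrame({
    "floor_area_m2":     [120, 340, 90, 560, 210],
    "glazing_ratio":     [0.25, 0.40, 0.15, 0.35, 0.30],
    "insulation_u":      [0.30, 0.22, 0.45, 0.18, 0.28],      # W/m2K
    "occupancy":         [4, 20, 2, 35, 8],
    "annual_energy_kwh": [14500, 52000, 9800, 78000, 26500],  # target
})

X = buildings.drop(columns="annual_energy_kwh")
y = buildings["annual_energy_kwh"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
predicted_energy = model.predict(X_test)   # feeds into the LCA/LCC/SLCA calculations
```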

Why is this essential? This methodology allows not only for the faster assessment and rejection of projects with unfavourable short- and long-term impacts, but also for the quicker acceptance of better, more sustainable building designs for a greener future!

Smart Quality Inspection

AI-Based Quality Inspection for Manufacturing

Quality inspection is one of the critical processes to ensure an optimal and low-cost manufacturing system. Human-operated quality inspection accuracy is around 80%. An AI-based approach could boost the accuracy of the visual inspection process up to 99.86%. Find out how:

The Smart Quality Inspection (SQI) process consists of six stages:

  1. The product is brought from the assembly line to the inspection area and placed in a designated location.
  2. A high-quality camera captures images of the item; the lighting conditions and distance from the product are adjusted based on size and camera equipment, and any necessary image transformation is done at this stage.
  3. A custom Convolutional Neural Network (CNN) architecture detects defects during the AI-based inspection. The architecture can handle different types of images with minimal modifications, and it is trained on images of defective and non-defective products to learn the necessary feature representations.
  4. The defect detection model is integrated into an application used on the shop floor to streamline the inspection process.
  5. During the inspection, the operator runs the defect detection algorithm and, based on the results, decides whether to accept or reject the product.
  6. The results of the inspection are entered into the SQI shop floor application and automatically stored in a spreadsheet, making it easier for the team to track and analyze them.
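
For illustration, a minimal PyTorch sketch of such a defect classifier is shown below; the layer sizes and input resolution are arbitrary choices and not the SQI production architecture.

```python
# Minimal PyTorch sketch (not the SQI production model) of a CNN that
# classifies product images as defective vs. non-defective.
import torch
import torch.nn as nn


class DefectCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),                 # logits: [non-defective, defective]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


model = DefectCNN()
batch = torch.randn(4, 3, 224, 224)           # four RGB inspection images
decision = model(batch).argmax(dim=1)         # 0 = accept, 1 = reject
```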

Why is this essential? This technology is crucial for monitoring the manufacturing environment’s health, preventing unforeseen repairs and shutdowns, and detecting defective products that could result in significant losses.

ABBYY and Novelis Innovation Expand Partnership to Leverage Purpose-Built AI Across Europe and the US

Novelis Innovation’s momentum for deploying ABBYY purpose-built artificial intelligence (AI) solutions in Europe is expanding into the United States. 

Learn more about the ABBYY and Novelis partnership

Discover the press release on ABBYY’s website