
    Accelerator for Maintenance Planning

    Deployed on AWS
    This product contextualizes SAP data for plant maintenance and planning.

    Overview

    This product retrieves SAP maintenance and planning data from S3, compresses and merges it, and then contextualizes the data to provide meaningful insights. The contextualized data is subsequently transformed into a knowledge graph using Neo4j for advanced analysis and visualization.

    Highlights

    • Knowledge Graph
    • EDA (Exploratory Data Analysis)
    • SAP Data Compression and Contextualization

    Details

    Delivery method
    64-bit (x86) Amazon Machine Image (AMI)

    Latest version

    Operating system
    Ubuntu 24.04-amd64-server-20250115




    Pricing

    Accelerator for Maintenance Planning

    Pricing is based on a fixed subscription cost plus actual usage of the product. You pay the same amount each billing period for access, plus an additional amount according to how much you consume. The fixed subscription cost is prorated, so you're only charged for the number of days you've been subscribed. Subscriptions have no end date and may be canceled at any time.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    Fixed subscription cost

    $23,000.00/month

    Usage costs (3)

    Dimension                   Cost/hour
    r4.4xlarge (Recommended)    $0.00
    r5a.4xlarge                 $0.00
    r5.4xlarge                  $0.00

    Vendor refund policy

    Eligibility: Refunds may be granted for unresolved technical issues, billing errors, or cancellations within the eligible period.

    Non-Refundable: No refunds after the trial, refund window, or for misconfiguration. Subscription refunds apply only to future renewals.

    Process: Email shubham.hembade@tridiagonal.com with details. Approved refunds are processed via AWS Marketplace. Policy may change with notice.


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Version release notes

    This release introduces two key features aimed at processing and structuring SAP data into a meaningful and contextualized format. The first feature, Exploratory Data Analysis (EDA), focuses on data ingestion, cleaning, and preprocessing, while the second feature constructs a Knowledge Graph in a Neo4j database from the processed data. These enhancements improve data usability and enable advanced analytics and insights.

    Feature 1: Exploratory Data Analysis (EDA)

    This feature handles the initial processing of raw SAP data stored in Amazon S3. It performs a series of operations to improve data quality and prepare the data for further analysis. Below are the key functions of this module:

    Data Ingestion & Merging
    Reads raw SAP data files stored in S3. Merges relevant SAP tables to provide a holistic view of the dataset. Ensures relationships between different tables are maintained.

    Data Cleaning & Processing
    Identifies and removes duplicate records to improve data accuracy. Handles missing values and inconsistencies in the dataset. Applies contextual transformations to convert raw data into structured information that aligns with business logic.

    Output Generation
    Produces a cleaned and structured dataset that serves as input for downstream processing. Saves the processed data in a defined format for efficient retrieval and integration.
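    The merge-and-deduplicate steps above can be sketched in pure Python. The table names (MARA, MAKT) appear in the configuration section of this listing; the column names (MATNR, MAKTX) are assumed standard SAP identifiers, and the product's actual join logic is internal, so this is only an illustrative sketch:

    ```python
    import csv
    import io

    # Toy stand-ins for two SAP extracts (real files would be read from S3).
    mara_csv = "MATNR,MTART\n1000,ROH\n1000,ROH\n2000,FERT\n"   # note the duplicate row
    makt_csv = "MATNR,MAKTX\n1000,Bearing\n2000,Pump\n"

    def read_rows(text):
        """Parse CSV text into a list of dict rows."""
        return list(csv.DictReader(io.StringIO(text)))

    # Deduplicate MARA by full-row identity.
    seen, mara = set(), []
    for row in read_rows(mara_csv):
        key = tuple(row.items())
        if key not in seen:
            seen.add(key)
            mara.append(row)

    # Left-join MAKT material descriptions onto MARA via MATNR.
    desc = {r["MATNR"]: r["MAKTX"] for r in read_rows(makt_csv)}
    merged = [{**r, "MAKTX": desc.get(r["MATNR"], "")} for r in mara]
    ```

    The same pattern extends to the other table families (BOM, task list, work order files) listed in the configuration below.
    
    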

    Feature 2: Knowledge Graph Creation in Neo4j

    This feature builds on the structured data from the EDA module to generate a Knowledge Graph in a Neo4j database. The knowledge graph provides an interconnected representation of SAP data, enabling advanced analysis and insights.

    Graph Construction
    Reads the processed data generated by the EDA module. Defines relationships between different entities (e.g., materials, work orders, functional locations) based on predefined rules. Constructs nodes and edges to represent hierarchical and relational structures within the SAP dataset.

    Database Integration
    Connects to a Neo4j instance to insert structured data into the graph database. Uses efficient batch processing techniques to optimize data insertion and retrieval. Ensures data integrity and consistency within the knowledge graph.

    Graph Utilization & Insights
    Enables querying and visualization of SAP data relationships. Provides a foundation for running graph-based analytics and recommendations.
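    A minimal sketch of the batched graph-construction pattern described above: rows from the processed data become node and relationship batches, which a loader would send to Neo4j with a single parameterized Cypher statement. The node labels and relationship type here (FunctionalLocation, WorkOrder, PERFORMED_AT) are illustrative assumptions, not the product's actual schema:

    ```python
    # Processed rows as produced by the EDA stage (toy data).
    work_orders = [
        {"aufnr": "4711", "tplnr": "MILL-01"},
        {"aufnr": "4712", "tplnr": "MILL-01"},
    ]

    # Derive the distinct functional-location nodes and the edge batch.
    floc_nodes = sorted({wo["tplnr"] for wo in work_orders})
    edges = [{"aufnr": wo["aufnr"], "tplnr": wo["tplnr"]} for wo in work_orders]

    # Batched UNWIND+MERGE Cypher keeps round trips low and the graph idempotent.
    cypher = (
        "UNWIND $rows AS row "
        "MERGE (f:FunctionalLocation {tplnr: row.tplnr}) "
        "MERGE (w:WorkOrder {aufnr: row.aufnr}) "
        "MERGE (w)-[:PERFORMED_AT]->(f)"
    )
    # With the official Neo4j Python driver, a session would then run:
    #   session.run(cypher, rows=edges)
    ```

    MERGE (rather than CREATE) is the usual choice here because reruns of the loader should not duplicate nodes or relationships.
    
    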

    Additional details

    Usage instructions

    1. Log in to the EC2 instance over SSH using your key pair; user name: ubuntu.
    2. Update the config.ini file (path: /home/ubuntu/config.ini) following the instructions below.

    README: Configuration File Instructions

    The config.ini file configures the application's settings, including database connections, AWS credentials, S3 file paths, and other parameters.


    [aws] - AWS Credentials

    • aws_access_key_id: Your AWS access key. Example: BHKWAAIOSFODNN7EXAMPLE
    • aws_secret_access_key: Your AWS secret key. Example: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

    [s3] - Amazon S3 File Locations

    • bucket_name: Your S3 bucket. Example: my-s3-bucket
    • data_path: S3 key to data folder. Example: s3://my-s3-bucket/path/to/data/
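    As the example shows, data_path is a full s3:// URI while bucket_name is the bare bucket. A small helper (hypothetical, not part of the product) illustrates how such a URI splits into bucket and key prefix:

    ```python
    from urllib.parse import urlparse

    def split_s3_uri(uri: str) -> tuple[str, str]:
        """Split an s3:// URI into (bucket, key prefix)."""
        parsed = urlparse(uri)
        if parsed.scheme != "s3":
            raise ValueError(f"not an S3 URI: {uri}")
        return parsed.netloc, parsed.path.lstrip("/")

    bucket, prefix = split_s3_uri("s3://my-s3-bucket/path/to/data/")
    # bucket is "my-s3-bucket"; prefix is "path/to/data/"
    ```
    
    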

    [data_output] - Local Data Output Path

    • data_output_path: Local directory for temporary data. Example: output/

    [material_files] - Material Data Files

    • mast: MAST.csv, mara: MARA.csv, t23: T023T.csv, makt: MAKT.csv, marc: MARC.csv, mard: MARD.csv

    [bom_files] - Bill of Materials Files

    • stpo: STPO.csv, tpst: TPST.csv

    [task_list_files] - Task List Files

    • plpo: PLPO.csv, plko: PLKO.csv, plmz: PLMZ.csv, tapl: TAPL.csv

    [functional_loc_files] - Functional Location Files

    • iflot: IFLOT.csv, iflotx: IFLOTX.csv, iloa: ILOA.csv

    [historical_work_order_files] - Work Order Data

    • afih: AFIH.csv, afvc: AFVC.csv, aufk: AUFK.csv, afko: AFKO.csv, afvv: AFVV.csv, qmel: QMEL.csv, qmih: QMIH.csv, resb: RESB.csv

    [loc_files] - Location Files

    • iflot: IFLOT.csv, iflotx: IFLOTX.csv, iloa: ILOA.csv

    [additional_files] - Additional Files

    • tapl: TAPL.csv, document: document.csv (Columns: Name, Hyperlink, EQFNR, TPLNR, etc.)

    [external_service_files] - External Service Data

    • lfa1: LFA1.csv

    [stxl] - STXL File

    • stxl: STXL.csv

    [kg] - Knowledge Graph Database

    • uri: bolt://localhost:7687, username: neo4j, password: your_password, database_name: neo4j

    [data_for_kg] - Data for Knowledge Graph Processing

    Do not change the names of these files; they are generated by the program itself.

    • data_path: output/, documents_file_key: document.csv, functional_location_file: functional_loc.csv, WO_with_task_description_file: WO_with_task_description.csv, bom_and_material_with_floc: bom_and_material_with_floc.csv, task_list_with_floc: task_list_with_floc.csv, tapl: TAPL.csv

    [sections] - Sections to Process

    A comma-separated list of section identifiers.

    • sections: HR01,SF01,UTIL,MILL,SF03,RS02,RS03,SF02,BLDG,SF00,BLCH,RS01,PLPG,RS04,SF04,RS00,PM16
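    A minimal sketch of how an application might read such a config.ini with Python's standard configparser module. The fragment below uses placeholder values and only two of the sections documented above:

    ```python
    import configparser

    # Illustrative config.ini fragment (placeholder values).
    sample = """\
    [kg]
    uri = bolt://localhost:7687
    username = neo4j
    database_name = neo4j

    [sections]
    sections = HR01,SF01,UTIL
    """

    cfg = configparser.ConfigParser()
    cfg.read_string(sample)

    # The comma-separated section list parses into individual identifiers.
    sections = [s.strip() for s in cfg["sections"]["sections"].split(",")]
    uri = cfg["kg"]["uri"]
    ```
    
    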

    3. Once the config file is updated, run the binary named "main" with the following command: ./main

    Tips:

    1. Ensure all fields are correctly filled to avoid application failure.
    2. Use proper AWS permissions to restrict access.
    3. Double-check S3 paths to match bucket structure.
    4. Secure Neo4j credentials and avoid exposing them publicly.

    Follow these instructions to correctly edit config.ini.

    Support

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


    Customer reviews

    Ratings and reviews

    0 ratings
    0 AWS reviews
    No customer reviews yet