Explore opportunities to grow in the advanced-technology space, applying advanced technologies in the process industry.
Our Core Values

• Encouraging new ideas, creativity, and continuous improvement.
• Putting customers' needs first and striving to exceed their expectations.
• Striving for the highest standards of quality and performance.
• Focusing on achieving measurable outcomes and goals.
• Collaborating effectively and valuing diverse perspectives.
• Being adaptable and responsive to changing market and technological trends.
• Acting ethically and transparently in all business practices.
• Intelligence
• Learning Culture
• Self-Motivation
Latest Opportunities
Engineer - IT/OT
About the Opportunity: You'll join our growing team to help us tackle the difficult problems of deploying historians/time-series databases efficiently, connecting to data sources reliably, monitoring and diagnosing trouble areas, and improving the product so that all of this gets faster and easier.
Essential Duties and Key Competencies:
• Experienced in the implementation of process data historians such as OSI PI, Honeywell PHD, InfluxDB, or ABB Knowledge Manager
• Knowledge of firewalls and networking, and of communication protocols such as TCP/IP, PROFIBUS, PROFINET, Modbus, and MQTT
• Must have OSI PI Administrator and OSI PI Developer certifications
• Understanding of IoT systems and the MQTT protocol
• Knowledge of port configuration for time-series historian setup
• Experienced in setting up Kepware servers and security/DMZ layers
• Knowledge of creating SQL Server stored procedures for tag configuration (see the sketch after this list)
• RESTful APIs/SDKs and connectors for SQL/NoSQL and time-series databases (SCADA, process historians)
• Querying in SQL and JSON
• Knowledge of Java/JVM/JDBC/ODBC connectors would be appreciated
• Back-end troubleshooting
• Knowledge of databases and streaming data
• Well-versed in techniques for accessing live data
• Setting up databases for real-time data collection
• Knowledge of time-series / sensor data
• You like both Linux and Windows (90% of our customers use Windows)
• Fluent in networking technologies such as TCP/IP, HTTP(S), and Ethernet, with experience in deep investigation using tools like Wireshark, tcpdump, TCPView, Process Explorer, and the rest of the Sysinternals Windows power tools
• Flexible to travel on demand to handle client concerns about installations and other issues
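To make the historian tag-configuration duties above concrete, here is a minimal sketch using Python's built-in sqlite3 module. SQLite stands in for the SQL Server instance a real deployment would use, and the tag names (FIC101.PV, TIC205.PV) are hypothetical.

```python
import sqlite3
from datetime import datetime, timezone

# In-memory SQLite stands in for the SQL Server backing a historian.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tags (
    tag_id INTEGER PRIMARY KEY,
    name TEXT UNIQUE,          -- e.g. a PI point name (hypothetical here)
    unit TEXT,
    scan_rate_s INTEGER        -- polling interval in seconds
)""")
conn.execute("""CREATE TABLE samples (
    tag_id INTEGER REFERENCES tags(tag_id),
    ts TEXT,                   -- ISO-8601 timestamp
    value REAL
)""")

# Hypothetical tag configuration for two plant sensors.
conn.executemany("INSERT INTO tags (name, unit, scan_rate_s) VALUES (?, ?, ?)",
                 [("FIC101.PV", "m3/h", 5), ("TIC205.PV", "degC", 10)])

# Ingest one sample per tag, as a collector service would on each scan.
now = datetime.now(timezone.utc).isoformat()
for tag_id, value in [(1, 42.7), (2, 180.3)]:
    conn.execute("INSERT INTO samples VALUES (?, ?, ?)", (tag_id, now, value))

# Join samples back to tag metadata -- the kind of lookup a dashboard issues.
for row in conn.execute("""SELECT t.name, s.value, t.unit
                           FROM samples s JOIN tags t USING (tag_id)"""):
    print(row)
```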
Education/Skills/Experience:
• Bachelor’s or master’s degree in computer science or similar. Relevant experience can compensate for formal education
• 3-5 years of relevant experience in a customer-facing data-intensive role
• Domain experience/knowledge of data sources within the oil and gas & manufacturing industry is a plus
• Experience with distributed computing, such as Kubernetes, and managed cloud services, such as Azure or AWS
• Have a DevOps mindset, and experience with Git, CI/CD, and deployment environments
• Enjoys working in cross-functional teams
• Able to independently investigate and solve problems
• Humility to ask for help and enjoy sharing knowledge with others
Engineer - Data Ops - Cloud
About the Opportunity: For our DataOps group, you will help ingest industrial data (IT, OT, ET) into the cloud (AWS, Azure) and build the contextualization layer that relates datasets and creates the asset hierarchy. We are looking for professionals with strong data engineering skills, including data transformation, SQL, and ETL/ELT, to ingest data in different forms and shapes.
Essential Duties and Key Competencies:
• Strong data engineering experience in Azure or AWS
• Strong experience in Azure Databricks, Fabric, and Synapse Analytics
• Experience in migrating and managing data from SAP S/4HANA (on-premises/cloud)
• Experience in building connectors for SAP (PM, MM, QM, FICO)
• Contribute to the design and implementation of data architecture solutions for cloud
• Configuring data extraction for various sources
• Experience in designing data pipelines for batch and real-time streaming
• Strong data engineering experience (SQL, T-SQL, NoSQL) for data transformation and data contextualization (Azure, AWS)
• Strong understanding of knowledge graphs (Amazon Neptune, Neo4j, Azure Cosmos DB); see the sketch after this list
• Strong Python experience
• Strong understanding of IT, OT, and ET data
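As a rough illustration of the contextualization layer described above, the sketch below builds a toy asset hierarchy as a directed graph. NetworkX stands in for a managed graph store such as Neptune, Neo4j, or Cosmos DB, and all asset names are made up.

```python
import networkx as nx

# Flat records as they might arrive from IT/OT source systems (hypothetical).
assets = [
    ("Site-A", "Unit-01"),        # (parent, child)
    ("Unit-01", "Pump-P101"),
    ("Unit-01", "Reactor-R201"),
    ("Pump-P101", "Sensor-FT101"),
]

# Contextualization step: turn flat parent/child rows into a hierarchy.
hierarchy = nx.DiGraph()
hierarchy.add_edges_from(assets)

# Typical query: everything installed under Unit-01.
print(sorted(nx.descendants(hierarchy, "Unit-01")))
# Typical query: the ancestry path used to roll a sensor up to its site.
print(nx.shortest_path(hierarchy, "Site-A", "Sensor-FT101"))
```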
Education/Skills/Experience:
• Bachelor’s or master’s degree in computer science or similar. Relevant experience can compensate for formal education
• 3-5 years of relevant experience in a customer-facing data-intensive role
• Domain experience/knowledge of data sources within the oil and gas & manufacturing industry is a plus
• Experience with distributed computing, such as Kubernetes, and managed cloud services, such as Azure or AWS
• Have a DevOps mindset, and experience with Git, CI/CD, and deployment environments
• Enjoys working in cross-functional teams
• Able to independently investigate and solve problems
• Humility to ask for help and enjoy sharing knowledge with others
Engineer - Data Ops - SAP Data Engineer
About the Opportunity: You will coordinate building, managing, and maintaining a repeatable data load and migration process for SAP implementations, and work with the client data lead in managing the Data Migration team responsible for identifying, extracting, cleaning, mapping, and loading both master and transactional data sets from multiple sources into the SAP system. The role also requires strong knowledge of Cognite Data Fusion, including data migration from SAP to Cognite and contextualization of the data in Cognite. The engineer will work on both SAP and Cognite.
Essential Duties and Key Competencies:
• Thorough experience in the SAP PM module and related integration modules such as SAP MM, SAP FICO, SAP PS, and SAP QM, as well as different legacy systems
• General SAP understanding across modules and critical master data such as material masters, BOMs, etc.
• Worked on CU (Compatible Unit); performed different roles, unit tests, integration tests, and test plan execution through to user sign-off.
• Worked extensively on master data migration and cut-over activities.
• Involved in implementation and application maintenance for the SAP PM Module.
• Experience in different order processes like Preventive, Breakdown, Calibration, Refurbishment, and project-related orders
• Experience in roll-out projects covering interfaces, enhancements, reports, data conversion, forms, and workflow for SAP PM module solution design and roll-out.
• Project experience in Implementation & Application Maintenance services.
• Worked on different phases of implementation from business requirement collection, project analysis, process design, blueprints, configuration, data loads, unit testing, integration testing, and production support.
• Expert experience in creating documentation for PM and related modules, and good experience in creating training material for end users.
• Manual transformation of data from legacy systems into SAP-loadable values (see the sketch after this list)
• Manual loading of data from legacy systems into the SAP system
• Execution of automated data loads via SAP or other automated tools.
• Working with Functional Consultants to verify data and troubleshoot/correct errors.
• Accountable for all the data loads into the SAP system
• Migrating and managing master data, bills of material, and maintenance plans for these technical systems
• Migrating the data from SAP PM – maintenance history, equipment data, and other tables related to production and maintenance to Cognite
• Python backend development
• Proficient in PostgreSQL
• Experience with cloud development and setting up knowledge graph (KG) databases in the cloud (e.g., Azure, AWS) is a strong plus
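As a toy illustration of the legacy-to-SAP transformation step noted above, this standard-library sketch maps hypothetical legacy equipment records onto SAP PM load fields (EQUNR, EQKTX, SWERK); a real migration would follow the client's field catalog and load templates.

```python
# Hypothetical mapping from legacy CMMS fields to SAP PM load columns.
FIELD_MAP = {"eq_no": "EQUNR", "desc": "EQKTX", "plant": "SWERK"}

def to_sap_row(legacy: dict) -> dict:
    """Rename fields and normalize values into SAP-loadable form."""
    row = {FIELD_MAP[k]: v for k, v in legacy.items() if k in FIELD_MAP}
    row["EQKTX"] = row["EQKTX"].upper()[:40]   # SAP equipment short text is 40 chars
    return row

# Example legacy records (made up) ready for a load template.
legacy_records = [
    {"eq_no": "P-101", "desc": "centrifugal feed pump", "plant": "1000"},
]
print([to_sap_row(r) for r in legacy_records])
```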
Education/Skills/Experience:
• Master's or Bachelor's degree in Computer, Electronics, IT, Chemical, or Mechanical Engineering, or an MSc in Statistics, with 3-5 years of relevant work experience
Intern - Generative AI
About the Opportunity: We are looking for dynamic and passionate interns with a keen interest in Generative AI. As a Generative AI Intern, you will work alongside our Process Data Analytics team to develop and implement AI models that enhance our solutions for the process industry. This role is ideal for individuals who are eager to apply their skills in AI/ML to real-world challenges, particularly in the fields of big data processing, data-driven modeling, and analytics.
Essential Duties:
• Strong coding skills in Python and its associated IDEs
• Strong understanding of LLMs and knowledge graphs
• Good prompt-engineering and model-troubleshooting skills to improve performance (see the sketch after this list)
• Assist in the design and development of Generative AI models using frameworks such as TensorFlow, PyTorch, or similar.
• Work on data aggregation, cleansing, and advanced performance modeling using ML/AI frameworks.
• Collaborate with the data science team to develop Proof of Concept (POC) models and applications.
• Participate in the automation of workflows using AI to optimize processes for industry-specific use cases.
• Conduct research and stay updated on the latest advancements in AI/ML to contribute innovative solutions.
• Assist in the development and maintenance of documentation related to AI models and workflows.
• Support the deployment and scaling of AI models on cloud platforms such as AWS or Azure.
• Engage in troubleshooting and fine-tuning models to enhance accuracy and performance.
• Contribute to client projects by providing analytical support and insights derived from AI models.
• Participate in training sessions and workshops to understand the full lifecycle of AI model development and deployment.
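As a small, library-free illustration of the prompt-engineering skill listed above, the sketch below assembles a few-shot prompt from a template; the task and examples are invented.

```python
# A minimal few-shot prompt builder -- no LLM client, just string assembly.
TEMPLATE = """You are an assistant for process-industry data questions.

{examples}Q: {question}
A:"""

def build_prompt(question: str, shots: list[tuple[str, str]]) -> str:
    """Format worked examples ahead of the new question (few-shot prompting)."""
    examples = "".join(f"Q: {q}\nA: {a}\n\n" for q, a in shots)
    return TEMPLATE.format(examples=examples, question=question)

# Hypothetical worked example guiding the model's answer style.
shots = [("What does PV mean on a tag?", "Process variable, the measured value.")]
print(build_prompt("What does SP mean on a tag?", shots))
```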
Key Competencies:
• Strong foundation in machine learning, deep learning, and generative models.
• Proficiency in programming languages such as Python, with experience in ML libraries (e.g., TensorFlow, PyTorch).
• Basic understanding of cloud computing platforms (AWS, Azure) and their AI/ML services.
• Analytical mindset with the ability to solve complex problems using data-driven approaches.
• Eagerness to learn and adapt to new tools, techniques, and industry trends.
• Strong communication skills, with the ability to present technical concepts clearly.
• Ability to work effectively in a collaborative, team-oriented environment.
Education/Skills/Experience:
• Currently pursuing or recently completed a Master’s/Bachelor’s degree in Chemical Engineering, Computer Science, Engineering, Data Science, or a related field.
• Prior experience or coursework in AI/ML, data science, or related areas is a plus.
Engineer - Generative AI
About the Opportunity: We are seeking talented and passionate Generative AI engineers to join our team and focus on cutting-edge applications of Generative AI, Agentic AI, and Knowledge Graphs. The role demands strong expertise in AI/ML frameworks, NLP, and graph-based reasoning to create transformative solutions for clients across domains.
Essential Duties:
AI Solution Development:
• Build and deploy Generative AI models for text, image, and multimodal applications.
• Develop Agentic AI systems capable of autonomous reasoning and decision-making.
• Design and implement Knowledge Graphs for complex data relationships and insights.
Advanced Analytics and Modeling:
• Apply NLP, computer vision, and deep learning techniques to solve real-world challenges.
• Work on fine-tuning and deploying Large Language Models (LLMs) for specific use cases.
• Integrate graph-based algorithms with AI models to enhance reasoning and inference.
Client-Focused Development:
• Collaborate with clients to understand their challenges and design tailored AI solutions.
• Present technical findings and insights to non-technical stakeholders effectively.
Data Engineering and Preparation:
• Work with large datasets, perform data aggregation, cleaning, and feature engineering.
• Utilize graph databases (Neo4j, TigerGraph) to design and query Knowledge Graphs.
Innovation and Research:
• Stay updated with the latest advancements in Generative AI, Knowledge Graphs, and Agentic AI.
• Experiment with emerging tools, frameworks, and methodologies to drive innovation.
Model Deployment and Optimization:
• Deploy models in cloud environments (AWS, Azure, GCP), ensuring scalability and efficiency.
• Monitor and optimize models for performance, interpretability, and reliability.
Cross-Functional Collaboration:
• Work closely with teams across engineering, data, and business functions to ensure seamless delivery.
• Support integration of AI systems with enterprise applications and workflows.
Key Competencies:
• Generative AI: Experience with models such as GPT, BERT, and Stable Diffusion, and with multimodal AI models.
• Agentic AI: Expertise in building autonomous agents for decision-making and interaction.
• Knowledge Graphs: Proficiency with graph databases (Neo4j, TigerGraph) and graph query languages (Cypher, SPARQL).
• NLP and AI/ML: Deep knowledge of NLP, machine learning, and AI tools (TensorFlow, PyTorch, Hugging Face).
• Programming: Strong coding skills in Python, with experience in libraries like scikit-learn, NetworkX, and Pandas (see the graph sketch after this list).
• Data Engineering: Hands-on experience in data pipelines, ETL, and working with graph-based data.
• Cloud and DevOps: Familiarity with cloud platforms (AWS, Azure, GCP) and deployment tools like Docker and Kubernetes.
• Problem Solving: Ability to translate complex technical challenges into actionable solutions.
• Research Orientation: Passion for exploring and applying cutting-edge AI and graph-based approaches.
• Communication: Strong written and verbal skills to present complex ideas to diverse audiences.
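To make the knowledge-graph reasoning theme above concrete, here is a minimal sketch of multi-hop lookup over a tiny triple store using NetworkX (named in the competencies); in production the triples would live in Neo4j or TigerGraph and be queried in Cypher. All facts shown are made up.

```python
import networkx as nx

# Toy knowledge graph: (subject, object, relation) triples, all hypothetical.
kg = nx.DiGraph()
kg.add_edge("Pump-P101", "Unit-01", relation="installed_in")
kg.add_edge("Unit-01", "Site-A", relation="part_of")
kg.add_edge("Pump-P101", "Impeller-X", relation="has_component")

def hops(graph: nx.DiGraph, start: str, end: str) -> list[str]:
    """Return the relation chain linking two entities (multi-hop reasoning)."""
    path = nx.shortest_path(graph, start, end)
    return [graph.edges[a, b]["relation"] for a, b in zip(path, path[1:])]

# "How is Pump-P101 related to Site-A?" -> ['installed_in', 'part_of']
print(hops(kg, "Pump-P101", "Site-A"))
```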
Education/Skills/Experience:
• Master’s/Bachelor’s degree in Chemical Engineering, Computer Science, Engineering, Data Science, or a related field.
• 1-3 years of relevant experience
• Prior experience or coursework in AI/ML, data science, or related areas is a plus.
Cloud - Intern
About the Opportunity: We are seeking young engineers who are keen to learn and implement cloud practices for our global customers.
Essential Duties:
• Aware of cloud services (AWS & Azure)
• Certified cloud architect
Education/Skills/Experience:
• Master's/Bachelor’s Engineering degree preferred
Engineer - Cloud
About the Opportunity: We are seeking experienced and innovative cloud professionals to innovate and deliver projects in our Azure-focused cloud initiatives. This role requires deep technical expertise in Azure architecture and services, strong leadership abilities, and a passion for delivering scalable, secure, and efficient cloud-based solutions to our clients.
Essential Duties:
• A certified Azure Infrastructure professional
• Experience with Azure IoT services, Data Lake, VMs, and networking
• Design, configure, and manage Azure IaaS and PaaS services (VMs, VNets, NSGs, Storage, etc.)
• Set up resource groups, role-based access control (RBAC), Azure Policies, and naming conventions.
• Automate infrastructure provisioning using Infrastructure as Code (IaC) tools like ARM Templates, Bicep, or Terraform.
• Manage hybrid environments via Azure Arc, VPNs, and ExpressRoute.
• Implement cost optimization strategies using Azure Cost Management, budgets, and resource tagging.
• Design and maintain CI/CD pipelines using Azure DevOps Pipelines or GitHub Actions for both infrastructure and applications.
• Integrate unit tests, code quality checks, security scans, and deployment validations in pipelines.
• Manage pipeline environments, approvals, secrets, and artifact versioning.
• Implement blue-green, canary, or rolling deployments for high availability.
• Collaborate with developers to containerize applications using Docker and deploy with AKS or Azure App Services.
• Set up DevOps toolchains (Git, Azure Repos, Boards, Pipelines, Artifacts).
• Champion DevSecOps practices by integrating security early in the development lifecycle.
• Drive shift-left testing, infrastructure testing, and automated validations.
• Enforce Git branching strategies and environment-specific configurations.
• Enable self-service provisioning for dev/test environments using IaC templates.
• Implement Azure Monitor, Log Analytics, and Application Insights for observability.
• Configure alerts, dashboards, and workbooks for proactive monitoring.
• Set up Service Health alerts and Action Groups for incident response.
• Ensure backup, recovery, and disaster recovery using Azure Backup and Site Recovery (ASR).
• Conduct routine health checks, performance tuning, and SLA compliance audits.
• Deploy workloads like web apps, APIs, containers, serverless functions, and databases.
• Manage dependencies, configurations, scaling policies, and certificates/secrets during deployment.
• Use Deployment Slots, Feature Flags, and App Config for controlled rollouts.
• Validate deployments with post-deployment gates, smoke tests, and rollback mechanisms (see the smoke-test sketch after this list).
• Coordinate with stakeholders to align deployments with business timelines and change windows.
• Analyze existing infrastructure and recommend cost-effective, scalable, and secure architecture patterns.
• Refactor legacy workloads to cloud-native solutions (e.g., Functions, AKS, Logic Apps).
• Implement network and security hardening using Firewall, NSGs, Private Endpoints, and Key Vault.
• Apply Well-Architected Framework and Cloud Adoption Framework principles.
• Promote governance, compliance, and an automation-first approach.
• IaC and scripting: Azure CLI, PowerShell, ARM, Bicep, Terraform
• CI/CD: Azure DevOps, GitHub Actions
• Monitoring tools: Azure Monitor, Application Insights, Log Analytics
• Networking, security, IAM (RBAC, Conditional Access)
• Containers: Docker, Kubernetes (AKS)
• Windows/Linux server administration
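As a minimal illustration of the post-deployment smoke test mentioned above, this standard-library sketch polls a hypothetical health endpoint and returns a non-zero exit code on failure, which a pipeline gate can use to block promotion or trigger a rollback.

```python
import sys
import time
import urllib.request

HEALTH_URL = "https://myapp.example.com/health"   # hypothetical endpoint

def smoke_test(url: str, attempts: int = 5, delay_s: float = 3.0) -> bool:
    """Return True once the endpoint answers 200, retrying a few times."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # connection refused / timeout: service may still be warming up
        time.sleep(delay_s)
    return False

# Non-zero exit tells the pipeline gate to block promotion / trigger rollback.
sys.exit(0 if smoke_test(HEALTH_URL) else 1)
```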
Education/Skills/Experience:
• Master's or Bachelor's Engineering degree with 3-5 years of relevant work experience
Intern - Full Stack
About the Opportunity: For our Data Science and Analytics Practice, we are looking for interns who will be responsible for designing, developing, and maintaining full-stack web applications.
Essential Duties:
• Experience in developing modern, responsive, and cross-browser-compatible websites using HTML, CSS, and JavaScript
• Strong understanding of writing backend APIs using Python (see the sketch after this list)
• Experience with Next.js will be an added advantage.
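For context on the backend-API expectation above, here is a minimal JSON endpoint built with only Python's standard library; in practice a framework such as FastAPI or Flask would replace this, and the route is hypothetical.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/health":          # hypothetical route
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Serve on localhost:8000; try GET /api/health in a browser or curl.
    HTTPServer(("localhost", 8000), ApiHandler).serve_forever()
```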
Education/Skills/Experience:
• Bachelor's Engineering degree in Computers/Electronics/IT/Chemical with 1 year of relevant work experience
Engineer - Full Stack
About the Opportunity: For our Data Science and Analytics Practice, we are looking for young and dynamic engineers with good experience in designing and implementing beautiful, modern, and attractive interactive UI/web applications for analytical solutions.
Essential Duties:
• Experience in developing modern, responsive, and cross-browser-compatible websites using HTML, CSS, and JavaScript
• Experience in Tailwind, Bootstrap, and Material UI
• Strong understanding of writing APIs using Node or Python
• Experience with Next.js will be an added advantage.
Intern - Metals-Mining/Cement
About the Opportunity: For our Data Science and Analytics Practice, we are looking for interns with a good understanding of the Python framework and of data-driven modelling techniques that use AI/ML algorithms and statistical methods to identify correlations and trends in data.
Essential Duties:
• Eager to learn and to explore new technologies and industries
• Hands-on academic or other experience with Python libraries and their frameworks
• An engineer with comprehensive engineering, mathematical, statistical, and analytical skills
• Strong communication and leadership skills
• Hands-on experience in AI/ML modelling techniques in Python
• Good at mathematical and statistical analysis for identifying correlations and insights in data
Education/Skills/Experience:
Data Scientist - Metals & Mining
About the Opportunity: For our Data Science and Analytics Practice, we are looking for strong industry professionals with a good understanding of operations and processes in the metals, mining, and cement domain, along with an understanding of analytics and modelling practice using the best data-driven techniques.
Essential Duties:
• Strong knowledge of data science (ML/AI) frameworks (Python, R, MATLAB, etc.)
• Minimum 2 years of practical experience delivering AI/ML-based prediction, forecasting, and optimization models to clients (first-principles modelling optional); see the sketch after this list
• Successfully developed end-to-end AI/ML-based solutions and implemented the techniques in practice
• Eager to learn and to explore new technologies and industries
• An engineer with comprehensive engineering, mathematical, statistical, and analytical skills
• Knowledge and understanding of the process parameters (sensors), IoT, and mechanical parameters required to develop models for the above areas
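As a small illustration of the kind of prediction work described above, the sketch below fits a linear model on synthetic sensor data with scikit-learn; real projects would use domain features, plant-historian data, and proper validation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic sensor data (hypothetical): two process parameters, e.g. kiln
# temperature and feed rate, predicting a quality measure.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # [temperature, feed_rate]
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Hold out a test split, fit, and report generalization performance.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```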
Additional Skills (Optional):
• Understanding of process integration in the metals & mining and cement industries (steel or aluminium manufacturing preferred)
• Certification in Azure, AWS, or GCP machine learning platforms
• Understanding of cloud-based ML/AI architecture and deployment strategies
• Understanding of MLOps and its best practices
• Experienced with MLflow, Jenkins, and others
• Knowledge of SQL, NoSQL, and time-series databases
Intern - Data Ops - Cloud Data Engineer
About the Opportunity: For our DataOps group, you will help ingest industrial data (IT, OT, ET) into the cloud (AWS, Azure) and build the contextualization layer that relates datasets and creates the asset hierarchy. We are looking for candidates with strong data engineering skills, including data transformation, SQL, and ETL/ELT, to ingest data in different forms and shapes.
Essential Duties:
• Strong knowledge of data engineering and its concepts
• Knowledge of ETL pipelines, data warehouse, data lake, and database
• Assist in designing and developing scalable data pipelines and ETL/ELT workflows (see the sketch after this list)
• Support data integration from various sources, including structured, semi-structured, and unstructured data
• Familiarity with Python or Scala for data manipulation and scripting
• Basic knowledge of cloud platforms (e.g., Azure, AWS, or GCP) for data ingestion, storage, transformation, and processing
• Contribute to data cleaning, transformation, and preparation for analysis
• Develop dashboards and reports using Power BI/Grafana to support business decisions
• Perform exploratory data analysis (EDA) and support data-driven insights
• Ensure data quality, consistency, and integrity across systems
• Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements.
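As a minimal sketch of the pipeline duties above (assuming pandas is available; the data and file name are hypothetical), the snippet runs a tiny extract-transform-load pass with basic quality checks.

```python
import pandas as pd

# Extract: a hypothetical raw sensor export.
raw = pd.DataFrame({
    "ts": ["2024-01-01 00:00", "2024-01-01 00:05", None],
    "sensor": ["FT101", "FT101", "TT205"],
    "value": ["42.7", "bad", "180.3"],
})

# Transform: enforce types, drop rows that fail basic quality checks.
clean = raw.assign(
    ts=pd.to_datetime(raw["ts"], errors="coerce"),
    value=pd.to_numeric(raw["value"], errors="coerce"),
).dropna(subset=["ts", "value"])

# Load: write the curated slice; a lake or warehouse would replace the CSV.
clean.to_csv("curated_readings.csv", index=False)
print(clean)
```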
Good to have:
• Exposure to big data processing or streaming platforms
• Understanding of databases
• Experience with version control systems like Git/GitHub
• Hands-on with Jupyter Notebooks or VS Code for data development
• Awareness of DevOps/DataOps concepts
• Basic knowledge of data governance, metadata management, contextualization, and data modeling
• Experience with containerization tools like Docker
• Awareness of machine learning basics and integration with data pipelines
• Familiarity with Agile/Scrum methodologies in data teams
Education/Skills/Experience:
• Bachelor’s degree in Computer Science, Information Technology, or related field
• Basic knowledge of ETL processes, data pipelines
• Basic understanding of cloud computing concepts
• Familiarity with databases (like SQL and NoSQL) and data modeling
• Hands-on experience in Python for data manipulation and scripting
• Exposure to Power BI or other data visualization tools
• Eagerness to learn and grow in the data engineering and data analytics domain
• Basic knowledge of digital transformation, Industry 4.0, and IoT/IIoT
Intern - Data Ops - IT/OT
About the Opportunity: You'll join our growing team to help us tackle the difficult problems of deploying OT historians/time-series databases efficiently, connecting to data sources reliably, monitoring and diagnosing trouble areas, and improving the product so that all of this gets faster and easier.
Essential Duties:
• Contribute to project activities such as implementation, configuration, and integration of SCADA, edge computing, and cloud-based data solutions
• Support IT/OT integration efforts for industrial OT systems (PLC/DCS, SCADA) and L2 MES/ERP IT systems with SCADA, edge, and cloud systems
• Understand and work on SCADA HMI/dashboard development, custom applications, and process workflows
• Be a team member in the delivery of IT/OT and DataOps projects
• Assist in understanding and documenting the network architecture between plant-floor devices and enterprise systems
• Work on data connectivity, integration, and visualization using modern platforms
• Participate in configuring and testing industrial protocols (OPC, Modbus, MQTT, etc.)
• Collaborate with cross-functional teams, including automation, IT, and data science teams
• Help with basic scripting (Python) and database tasks (SQL) for data acquisition and processing
Education/Skills/Experience:
• Degree in Instrumentation Engineering or Electrical Engineering
Meet Our Team
“Since joining Tridiagonal Solutions as a Process Engineer through VIT Pune's Campus Recruitment Drive - 2022, my role has been consistently fulfilling. Specializing in Operator Training Simulation, Tridiagonal has sharpened my technical skills and nurtured leadership, empathy, and client communication. We prioritize inclusivity, teamwork, and collective success, fostering reliability and belonging. With robust infrastructure and supportive colleagues, seamless execution is ensured. Continuous knowledge sharing and collective progress drive everyone's involvement. Employee engagement programs and monthly activities enhance professional development and personal well-being. Tridiagonal isn't just a workplace; it's a platform for growth, innovation, and shared success, fostering a thriving and supportive community.”
“I want to express my gratitude for the progressive and enthusiastic environment fostered here. It's truly inspiring to be part of such a supportive team that not only encourages individual growth but also provides opportunities for on-site learning. The enthusiasm within the team during this learning phase is infectious, making every day a rewarding experience. I appreciate the progressive mindset and supportive approach that make this company a stimulating place to work. Looking forward to continuing this journey with such an enthusiastic team.”
“With a unique blend of chemical background and interest in data science, my role at Tridiagonal has evolved to encompass diverse projects, fostering professional growth. The company culture is supportive and nurturing, with a collaborative environment that encourages learning and skill development. Teamwork in my department is excellent, ensuring smooth operations and proactive issue prevention. Tridiagonal provides valuable support, allowing flexibility for learning and success in my role. Though early in my time here, I'm excited for the future and eager to create memorable experiences while expanding my knowledge and skills.”

“During my M.Tech program, I discovered Tridiagonal Solution and was captivated by their innovative approach to data science and engineering solutions. This led to an internship, which evolved into a senior data scientist position. Tridiagonal's culture promotes collaboration, innovation, and continuous learning, offering ample growth opportunities and a supportive environment. Teamwork is exceptional, emphasizing knowledge sharing and mutual respect. Guidance from experienced colleagues and extensive training programs have been invaluable for my development. Tridiagonal provides diverse project opportunities and a supportive culture for newcomers, fostering both personal and professional growth. Working here has significantly boosted my skills and confidence. I plan to launch a YouTube channel, "Life at TSPL," to share our experiences.”

“In my role at Tridiagonal Solutions, I focus on lead generation, utilizing data for market expansion. Over the past year, I've experienced significant growth, shouldering more responsibilities and contributing to impactful projects. Tridiagonal's collaborative culture emphasizes continuous learning, with transparent leadership urging us to surpass expectations. Teamwork is paramount, fostering cooperation and support. For recent graduates, Tridiagonal provides a nurturing environment for learning and mentorship, ideal for launching a career. Their dedication to employee development and collaboration is commendable. Considering joining Tridiagonal? It's an excellent opportunity for professional and personal growth in a supportive setting.”

“With a background in Advanced Process Control for optimizing petroleum refineries, I was attracted to Tridiagonal's innovative approach to AI-driven optimization. Since joining, my role has evolved significantly; although initially lacking AI experience, I received support from colleagues and now manage projects autonomously. Tridiagonal's culture is progressive, prioritizing client value and employee development. Communication and idea-sharing within the team are regular, fostering mutual trust. Overall, expert guidance, a collaborative environment, feedback, recognition, and work-life balance at Tridiagonal have been instrumental in my success and contribution to company objectives.”

“I was initially drawn to Tridiagonal for the opportunity to work on cutting-edge industrial projects involving process simulation, control systems, and operator training. My journey began with hands-on experience in DCS activities, system integration, FAT, and plant startup support. These experiences gave me a strong foundation in automation and control. Over time, my role has evolved to include process modeling, dynamic simulation, and OTS development, allowing me to contribute to both technical delivery and project execution. The continuous learning environment and exposure to real plant operations have helped me grow both technically and professionally.
Collaboration and teamwork in our department feel very natural and well-balanced. Everyone brings their strengths to the table, and there’s a genuine sense of mutual respect and support. Whether it’s meeting tight deadlines or solving a complex issue, we work together efficiently and learn from each other along the way.”

Explore Our Culture and People






MixIT ROI Calculator
Right first-time scale-up
| Input | Value |
| Typical number of scale-ups per year (all intermediate and final product steps) | |
| Cost of each scale-up failure | |
| Total Savings | $1,050,000 |

Reduction in excess reagent
| Input | Value |
| Number of mixing processes requiring an additional excess of reagents/purification per year | |
| Aggregate cost of additional chemicals across all such processes per year | |
| Experimental analyses (additional samples drawn) per year | |
| Cost per excess-reagent (experimental) analysis | |
| Total Savings | $440,000 |

Scale-up cost
| Input | Value |
| Pilot scale-up batches per year | |
| Average cost per kg of raw materials per batch | |
| % reduction in pilot batches using MixIT (direct scale-up from lab to plant) | |
| Total Savings | $280,000 |

Manufacturing throughput
| Input | Value |
| Operating days per year | |
| Average batch time (hours) | |
| Batches per year (currently) | |
| Reduction in batch cycle time (hours) with MixIT | |
| Potential number of additional batches per year | |
| Value of each batch (profit per batch) | |
| Total Savings | $480,000 |

Right first-time equipment indent
| Input | Value |
| Dedicated reactors designed per year | |
| Newly commissioned reactors needing an agitator/dip tube/baffle design change | |
| Weeks needed for reactor modification | |
| Opportunity loss due to lost time (profit in USD per week) | |
| Total Savings | $800,000 |

Summary
| Category | Annual Savings |
| Right first-time scale-up | $1,050,000 |
| Reduction in excess reagent | $440,000 |
| Scale-up cost | $280,000 |
| Manufacturing throughput | $480,000 |
| Right first-time equipment indent | $800,000 |
| Total Annual Savings using MixIT | $3,050,000 |

You save $3,050,000 per year on a MixIT investment of $30,000.
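For readers who want to verify the arithmetic, here is a minimal sketch of the calculator's aggregation using the example figures shown above; the per-category values are the calculator's illustrative defaults, not measured results.

```python
# Example savings per category, as displayed in the calculator above (USD/yr).
savings = {
    "Right first-time scale-up": 1_050_000,
    "Reduction in excess reagent": 440_000,
    "Scale-up cost": 280_000,
    "Manufacturing throughput": 480_000,
    "Right first-time equipment indent": 800_000,
}
investment = 30_000  # annual MixIT investment shown above

total = sum(savings.values())
print(f"Total annual savings: ${total:,}")                  # $3,050,000
print(f"Return on investment: {total / investment:.0f}x")   # ~102x
```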
Note:
* Assumes 30% scale-up failures, with 50% of failed batches due to improper mixing scale-up.
** Assumes 40% savings in reagent and analysis costs based on MixIT optimization.
** Assumes two additional samples and 100 batches per product, with an average cost of $100 per sample.
Disclaimer:
1) The above calculations use standard industry values; actual savings may vary.
2) This report is for informational purposes. Tridiagonal Software makes no warranty regarding differences between actual results and this report.
3) By clicking the submit button, you choose to share your information with Tridiagonal Software and authorize them to contact you for marketing purposes.