
Valohai Hybrid AWS
Valohai reviews from AWS customers
0 AWS reviews
External reviews
25 reviews
External reviews are not included in the AWS star rating for the product.
Great niche company
What do you like best about the product?
Their customer service is amazing, prompt, responsive, and most importantly, effective
What do you dislike about the product?
Their new colors on their home screen, would prefer a dark mode way more.
What problems is the product solving and how is that benefiting you?
Helps organize and orchestrate all our machine learning operations.
Flexibility and responsiveness for a versatile and easy-to-use tool.
What do you like best about the product?
+ Probably our most responsive vendor. Any issues are troubleshot and queries are answered amazingly fast, which is a massive boon over open-source/DIY alternatives.
+ No forced vendor lock-in. Has an API and Python utils that can be used when writing software, but the whole system is fully functional with just one .yaml file that does not have to be baked into the code (a minimal sketch follows this list).
+ Flexible compute backend. Use instances from AWS, GCP, or Azure with cloud storage from AWS, GCP, Azure, or OpenStack Swift. An on-prem solution is also available.
+ Arbitrary code execution. We do a lot of pre- and post-processing and other very computationally intensive work. The same platform that allows us to create and track ML experiments is flexible enough to host a parallel Monte Carlo simulator that uses the results of those ML models.
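For readers unfamiliar with the .yaml mentioned above, Valohai projects are described by a valohai.yaml configuration file that lives next to the code. The step below is a minimal, illustrative sketch; the step name, Docker image, parameter, and S3 path are assumptions made for the example, not details from this review.

  # valohai.yaml -- illustrative sketch; names, image, and paths are made up for the example
  - step:
      name: train-model
      image: python:3.10              # any Docker image works; the training code itself stays unchanged
      command:
        - pip install -r requirements.txt
        - python train.py {parameters}
      parameters:
        - name: epochs
          type: integer
          default: 10
      inputs:
        - name: training-data         # fetched from cloud storage (e.g. S3) at execution time
          default: s3://my-bucket/data/train.csv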
What do you dislike about the product?
- API documentation could be more comprehensive.
- API key management still immature, only user-specific API-keys with no access management.
- No synergy benefits like some other commercial all-in-one solutions. The huge players have some pros in their walled gardens, including some more feature-complete solutions. With Valohai you'll need to add more pieces from other sources, such as model monitoring, labelling, and other mix-and-match infrastructure pieces. That's more of a design choice than a negative, the flip side of the freedom and flexibility they allow. Judge for yourself what you need.
What problems is the product solving and how is that benefiting you?
I believe a machine learning engineer's productivity is mostly a function of how many experiments they can run and effectively keep track of. If you want to do impactful machine learning on an industrial scale, you'll need an MLOps solution to do so. When scaling up the ML part of our analytics we evaluated the different solutions available at the time, including commercial and open-source options. We ended up with Valohai due to the freedom and flexibility allowed by their design.
During the past few years we have used their software to spin up tens of thousands of executions on various CPU and GPU instances, giving us the computational power to analyse thousands of satellite images. Their software has allowed us to train multiple models in parallel while keeping track of all the inputs and outputs inside their version control system.
Very capable platform with great customer service
What do you like best about the product?
Valohai is very intuitive to use, which made it easy for our team to deploy and monitor models efficiently and effectively.
What do you dislike about the product?
There were a few minor quality of life features that were missing, but the Valohai team listened to our feedback anytime we discussed feature requests.
What problems is the product solving and how is that benefiting you?
They are making it easy for data science teams to develop and deploy models without all the hassle that typically comes from bespoke cloud deployment and monitoring.
Valohai is a great asset to make your MLOps experience as smooth as possible
What do you like best about the product?
- Adding tags to multiple executions at once.
- Alias for carrying outputs from one pipeline to another.
- Ease of logging
- Reusing nodes
- Traceability
- Multiple pipelines in the same YAML
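To make the alias and multiple-pipelines points in the list above concrete, here is a rough sketch of how two pipelines and an alias-based default input can sit in one valohai.yaml. The pipeline, node, step, and datum alias names are illustrative assumptions, not details from this review.

  # Illustrative sketch only -- pipeline, step, and alias names are made up
  - pipeline:
      name: training-pipeline
      nodes:
        - name: preprocess
          type: execution
          step: preprocess-data
        - name: train
          type: execution
          step: train-model
      edges:
        - [preprocess.output.*, train.input.dataset]

  - pipeline:
      name: evaluation-pipeline
      nodes:
        - name: evaluate
          type: execution
          step: evaluate-model

  - step:
      name: evaluate-model
      image: python:3.10
      command: python evaluate.py
      inputs:
        - name: model
          default: datum://candidate-model   # alias that carries an output from the training pipeline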
What do you dislike about the product?
- Creating pipelines:
* When nodes have an alias as default input, the alias does not load automatically.
- Tags:
* Can't add tags to the triggers in the UI
* Can't add tags to the pipeline in the UI after it's created
What problems is the product solving and how is that benefiting you?
It helps us track and trace our experiments and data. Also, it allows us to store our artifacts and give them aliases.
It allows the Data Science team to be more productive when deploying our models without worrying about cloud engineering.
Simple yet powerful MLOps solution that drastically shortens our time to market
What do you like best about the product?
Getting started is very easy and unopinionated. You can just take the code that you already have and make it work with Valohai in a few hours and it covers the whole value chain.
What do you dislike about the product?
Nothing that I can think of right now. Valohai meets all of our needs.
What problems is the product solving and how is that benefiting you?
We run all of our MLOps on Valohai. That means data processing, model training, deployment, etc., which all run inside our AWS account. We focus on deep learning models, so we need to provision GPU resources dynamically, which Valohai is a perfect fit for.
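As an illustration of the dynamic GPU provisioning described above, a Valohai step can point at a GPU environment so the instance is spun up only for the duration of the run. The environment slug and image tag below are assumed examples; actual values depend on what is configured in your own AWS account.

  # Illustrative sketch -- the environment slug and image tag are assumed examples
  - step:
      name: train-deep-model
      image: pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime
      environment: aws-eu-west-1-g4dn-xlarge   # GPU instance launched on demand inside our own cloud account
      command: python train.py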
Systematic machine learning research and orchestration
What do you like best about the product?
* Completely technology agnostic
* Catering to different levels of expertise and commitment, from simply running an experiment in the UI to building a pipeline with hundreds of steps programmatically
What do you dislike about the product?
Nothing I really dislike. I wish the flow control in pipelines (e.g., if conditions or while loops) were more sophisticated, but it is something they are constantly working on.
What problems is the product solving and how is that benefiting you?
It version-controls all our experiments, thereby facilitating systematic research.
For our work, we need to be able to collaborate as a team, and we need to be able to reproduce our work. Valohai provides a central place where experiment results are collected and where anyone else can get on top of previous work on a topic and eventually take it over.
How did a colleague train a model 6 months ago? Go look in the project.
What hypothesis are you working on? Go look in the project.
A very malleable MLOps tool
What do you like best about the product?
The cost savings: we were considering building an on-prem datacenter, but with Valohai we get per-second charging of AWS EC2 instances, and you just can't beat the bottom-line cost.
The traceability: being able to trace and reproduce every production model is invaluable to us. Working with consumer products it just makes sense, and for our automotive customers it's a must-have.
The customer support: knowledgeable personnel who have never let us down, and who to this date still implement customer requests and suggestions.
The generic way it's built: we do a lot of things in Valohai, not just machine learning, such as rendering or data processing. If we have code that we want to run on a cloud machine, there's really no reason not to run it through Valohai.
What do you dislike about the product?
Oftentimes we use Valohai in ways it was most likely not intended for. This means that we sometimes stumble upon undocumented behavior, rules, or limitations.
It's not always clear whether it's a Valohai limitation, a user error, or a bug, because error codes are missing or hard to understand.
This really isn't Valohai's fault, and we have always gotten superb support whenever these issues occur.
What problems is the product solving and how is that benefiting you?
Traceability, cost savings, time savings, and ease of use.
Because of the GUI and the easy-to-grasp pipelines, even the more complex tasks can be handled by everyone on our team, so it's a great place to both learn and use our ML pipeline.
Recommendations to others considering the product:
Don't worry too much about containers or other prerequisites in Valohai.
Start using it and populate as you go.
Easy-to-use MLOps platform
What do you like best about the product?
- Tech agnostic
- Awesome support team
- Compatible with our security requirements
- Allows our AI team to focus on developing DL models into production without heavy collaboration with DevOps
- Centralized place for all of our data science experiments, models, and metrics.
- Platform allows us to achieve our goals of automating as much as we can
What do you dislike about the product?
- There has been a very minor bug or two that was resolved quickly by communicating with their support team.
- Plotting functionalities may be minimal, but their tech-agnostic approach allows you, for example, to integrate W&B to get the best of both worlds.
What problems is the product solving and how is that benefiting you?
- Centralized platform with a great support team for housing our deep learning experiments, pipelines, etc.
- Facilitates developing end-to-end pipelines with a small team; allows us to automate easily.
- Allows our DevOps team to focus on their work and AI team to focus on ours
Recommendations to others considering the product:
They have great whitepapers and blogs to help get one acquainted with the importance and value of an MLOps platform, and even posts comparing their platform with others out there.
Great MLOps Platform
What do you like best about the product?
Integration with popular cloud platforms in the market. Client support is very useful, and guidance on how to follow best practices is a great service.
What do you dislike about the product?
Local use could be a bit tricky; however, they support local implementation if it's needed.
What problems is the product solving and how is that benefiting you?
We are working on telemarketing classification problems.
Huge productivity boost, easy to use. Plus, it has amazing in-person support
What do you like best about the product?
I liked the way it integrates different tools in a single UI that makes it easy to run anything from simple scripts to notebooks to entire GitHub repos from a single dashboard, while letting you easily keep track of the resources used and the parameters set for each run. In my case, I liked having a bird's-eye view of all the training pipelines for my ML models and their performance in terms of accuracy, speed, and resources needed to run. It also integrates very smoothly with our DBs and S3 buckets.
What do you dislike about the product?
So far, nothing. All the issues I had were promptly taken care of by the Valohai team so that I could run my ML tools (data harvesters, model training and data cleaning pipelines) without any major disruption.
What problems is the product solving and how is that benefiting you?
I am using Valohai to gather and preprocess data as well as train models with different architectures on different machines. I can easily run smaller instances of the same model on smaller testing machines and then scale up the training on larger dedicated nodes. I have access to all of our project's infrastructure (which is scattered among different machines) from a single place which is great and lets me set up and run every sort of experiment very quickly.
Recommendations to others considering the product:
Valohai is a great environment to develop and run your ML projects, especially if you need to access scattered resources (buckets, DBs, GPU nodes, etc.) from a single, easy-to-use user interface.
The support team is also very responsive, and there are lots of documents and resources that can help you with all aspects of using Valohai, so even if you are a beginner you will be able to set it up and use it very quickly and effectively.