Any machine learning mastermen out there??
Date: May 13th, 2018 5:31 PM Author: orange pit sound barrier
Here's where I'm confused
Machine learning models are very domain specific - one company's data can be used to train a model, but that model won't generalize to other companies' data, etc
How does one scale software using AI then?
How could you create software that takes one company's data, learns from it to make predictions, and then sell that same package to another company to do the same thing with its own data?
(http://www.autoadmit.com/thread.php?thread_id=3975999&forum_id=2#36042817) |
Date: May 13th, 2018 6:15 PM Author: orange pit sound barrier
There are a few in fintech that look interesting
Also some in the sales and marketing space that look very interesting. That's the area I'm interested in
Why do you think most will go down in flames?
(http://www.autoadmit.com/thread.php?thread_id=3975999&forum_id=2#36042972) |
Date: May 13th, 2018 6:24 PM Author: Stimulating twinkling uncleanness
a lot of the ideas seem silly and/or will be done much better by large tech companies with more resources. things like this:
https://www.offworld.ai/
even if data weren't an issue, computational resources are, and companies like Google will have a major advantage. AlphaZero was an obvious idea and could have been invented by many people if they had had access to a few thousand TPUs, as Google did. the same goes for other research in this field.
(http://www.autoadmit.com/thread.php?thread_id=3975999&forum_id=2#36043012) |
Date: May 13th, 2018 6:41 PM Author: bonkers aromatic parlour personal credit line
If you're aiming high, future opportunities would be largely built around your ability to succeed as a startup founder and ride the wave of AI hype, which is a mix of being able to raise lots of money, tons of marketing, fraud and other unethical behavior, etc.
A more realistic approach would be to solve some narrow subset of AI/ML problems that a bigger company can't figure out due to corporate bureaucracy, dysfunctional management, good engineers jumping ship, etc., and then have that company acquire yours.
(http://www.autoadmit.com/thread.php?thread_id=3975999&forum_id=2#36043075) |
Date: May 13th, 2018 8:36 PM Author: orange pit sound barrier
So just to sum up: AI software is fundamentally different from other types of software. I can create accounting software and sell it to millions of users. I can scale that business.
I can't do the same thing for AI software, because machine learning is not explicitly programmed. Rather, it learns from data, which is necessarily going to vary between industries and companies. There is no one size fits all solution for it.
(http://www.autoadmit.com/thread.php?thread_id=3975999&forum_id=2#36043645) |
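The distinction above can be made concrete: the training *code* is reusable software, while each trained *model* is specific to the data it was fit on. A minimal sketch, assuming scikit-learn and purely synthetic per-company data (the churn framing and all feature values are hypothetical):

```python
# Sketch: the pipeline is reusable software; the model is data-specific.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_churn_model(X, y):
    """Same code for every customer; each gets a model fit to its own data."""
    model = LogisticRegression()
    model.fit(X, y)
    return model

rng = np.random.default_rng(0)
# Two hypothetical companies with different feature distributions
X_a, y_a = rng.normal(0, 1, (200, 3)), rng.integers(0, 2, 200)
X_b, y_b = rng.normal(5, 2, (200, 3)), rng.integers(0, 2, 200)

model_a = train_churn_model(X_a, y_a)
model_b = train_churn_model(X_b, y_b)

# The learned coefficients differ: each model reflects its own company's data,
# so selling the software means selling the pipeline, not a single model.
print(model_a.coef_)
print(model_b.coef_)
```

You could sell `train_churn_model` to a million customers, but each one still has to retrain on its own data, which is the scaling difference being described.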
Date: May 15th, 2018 2:30 PM Author: Territorial milky gaming laptop
Actual person with ML experience here.
ML is the process of "training" a model using an algorithm/framework and data. This results in a model that can help you make inferences or decisions given a question (should we give this person a loan? Is this an example of fraud?).
Applying the model to solve real world problems is what gives you AI. You can build software with a model that makes decisions or provides insight given data in a standardized format. For example, take a look at the Rekognition API from Amazon for image recognition.
So I think you're making an error in assuming that all machine learning models are highly specific to a certain problem or organization. What you're probably thinking of is that algorithms are specific to training certain types of models.
(http://www.autoadmit.com/thread.php?thread_id=3975999&forum_id=2#36055928) |
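The Rekognition-style point above, that one model can serve many customers when inputs follow a standardized format, can be sketched with scikit-learn on synthetic data (the 4-feature format and both "customer" inputs are assumptions for illustration, not any real API):

```python
# Sketch: train once, serve many customers -- this works when all callers
# submit data in the same standardized format, as with an image-recognition API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Stand-in for a large pooled training set (e.g. labeled images as vectors)
X_train = rng.normal(0, 1, (500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

shared_model = RandomForestClassifier(n_estimators=50, random_state=0)
shared_model.fit(X_train, y_train)

def predict_for_customer(batch):
    """Any customer can call this, provided `batch` follows the
    standardized 4-feature format the model was trained on."""
    return shared_model.predict(np.asarray(batch))

# Two different "customers" hit the same shared model with same-format inputs
print(predict_for_customer([[2.0, 2.0, 0.0, 0.0]]))
print(predict_for_customer([[-2.0, -2.0, 0.0, 0.0]]))
```

This is the scalable case: the model itself is the product, and customer data only flows through it at inference time rather than retraining it per customer.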
Date: June 8th, 2018 8:02 AM Author: Stimulating twinkling uncleanness
Gary Marcus has been pushing this for a while now. He thinks the current path in machine learning is fundamentally flawed, as it focuses strictly on learning and places little to no value on innate knowledge humans use to guide decision making.
I think he is wrong for several reasons. There are pretty strict limits on the amount of data encoded in the human genome. From that base number, you have to remove everything that codes for basic cellular functions, body architecture, etc. Even genes that are strictly active in the brain will often code for things that we don't care about when engineering intelligence, such as what we find rewarding.
Even assuming what is left over is a few megabytes of data that provides priors to bias human learning, this doesn't mean we need that knowledge to replicate the functionality of our brains. It could simply be a shortcut to speed human development. The reality is that we are able to push far more data through ML systems than any human will ever be exposed to in a normal lifetime. You can have very weak priors in that situation, which is why DL systems work even though they don't resemble the brain in any substantive way.
The limitations of Google Duplex and other chatbots are not due to some inherent weakness in the learning based approach. These systems need richly detailed world models from unsupervised learning. They don't actually understand the world yet, which is why they are inflexible and require large specialized data sets to work in limited domains. I really doubt this will be a problem in a few more years.
(http://www.autoadmit.com/thread.php?thread_id=3975999&forum_id=2#36206001) |