We have already explored GPT-3 at a high level. Now let's unpack the terms it is built from. So today, we'll start by understanding generative models.
Generative modelling is a branch of statistical modelling. It is a method for mathematically approximating the world.
There are two kinds of statistical models: generative and discriminative.
Let’s take a closer look at them.
Generative models learn to produce new data that resembles the data they were trained on. Discriminative models learn to distinguish between different kinds of data.
But what does that mean in practice?
A generative model can generate new photos of animals that look like real animals, while a discriminative model can differentiate a dog from a cat.
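The contrast can be sketched in a few lines of code. In this toy 1-D example (all names and numbers are illustrative, not from the original), the generative approach models how each class's data is distributed, which also lets it sample brand-new examples, while the discriminative approach only learns the boundary between the classes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: "cat" features cluster near 0, "dog" features near 5.
cats = rng.normal(loc=0.0, scale=1.0, size=500)
dogs = rng.normal(loc=5.0, scale=1.0, size=500)

# Generative approach: estimate each class's distribution...
cat_mu, cat_sigma = cats.mean(), cats.std()
dog_mu, dog_sigma = dogs.mean(), dogs.std()

# ...which lets us generate brand-new "cat-like" samples.
new_cats = rng.normal(cat_mu, cat_sigma, size=10)

# Discriminative approach: only learn a decision boundary between classes.
boundary = (cat_mu + dog_mu) / 2  # simple midpoint threshold

def classify(x):
    return "dog" if x > boundary else "cat"
```

The discriminative classifier can label a point, but it has no way to produce a new cat; the generative side can do both.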
We are surrounded by an incredible amount of easily accessible information — both in the physical world and the digital one.
The tricky part is to develop intelligent models and algorithms that can analyze and understand this treasure trove of data. Generative models are one of the most promising approaches to achieving this goal.
To train a generative model, we need to first collect a dataset. This dataset should be a collection of examples that helps the model learn to perform a given task.
Usually, a dataset used to train such models is a large amount of data from a specific domain: for example, millions of images of cars to teach a model what a car is. Datasets can also consist of sentences or audio samples.
Once the model has seen many examples, the next step is to train it to generate similar data of its own.
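As a minimal sketch of this "show examples, then generate similar data" loop, here is a tiny character-level bigram model. The corpus and all identifiers are made up for illustration; real generative models are vastly larger, but the principle of learning statistics from examples and then sampling from them is the same:

```python
import random
from collections import defaultdict

# Stand-in "dataset": a few example sentences in one narrow domain.
corpus = ["the cat sat", "the dog sat", "the cat ran"]

# Training: count which character follows which ("^" = start, "$" = end).
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    text = "^" + sentence + "$"
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1

def generate(rng, max_len=40):
    """Sample a new string whose character statistics mimic the corpus."""
    out, ch = [], "^"
    while True:
        nxt = counts[ch]
        chars, weights = zip(*nxt.items())
        ch = rng.choices(chars, weights=weights)[0]
        if ch == "$" or len(out) >= max_len:
            break
        out.append(ch)
    return "".join(out)
```

Calling `generate(random.Random(0))` yields a new string that was never in the corpus but is built only from character transitions the model observed during training.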
AI already helps create software; hardware is next. Several companies, such as Circuitmind, Cells, and JITX, are starting to use AI for hardware design.
Voxels vs. polygons: I like this paper on generating 3D voxel-based objects: https://alexzhou907.github.io/pvd. Compared to polygon-based models, I think voxels are a more accurate way of modelling actual 3D objects. They also seem closer to how 3D printing works.
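For readers unfamiliar with the representation: a voxel model is just a 3-D occupancy grid, where each cell is filled or empty. This small sketch (grid size and shape chosen arbitrarily for illustration) voxelizes a sphere, and slicing the grid layer by layer mirrors how a 3-D printer builds an object:

```python
import numpy as np

# A voxel model is a 3-D boolean occupancy grid: True = filled, False = empty.
n = 16
coords = np.indices((n, n, n))            # shape (3, n, n, n): x, y, z indices
center = (n - 1) / 2
dist2 = ((coords - center) ** 2).sum(axis=0)
sphere = dist2 <= 6 ** 2                  # occupancy grid of a radius-6 sphere

# Taking horizontal slices is analogous to the layers a 3-D printer deposits.
middle_slice = sphere[n // 2]             # 2-D boolean cross-section
```

Polygon meshes describe only an object's surface, whereas a voxel grid like this one represents its full solid interior, which is one reason it maps naturally onto additive manufacturing.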