# Keyword Spotting
This example builds upon code from
[this repo](https://github.com/douglas125/SpeechCmdRecognition)
to spot keywords in audio signals.

### TensorFlow2
...trained neural network. In Python we reconstruct the model, load the weights and
export the model as _SavedModel_.
After loading the .h5 file, we wrap the call to the model. This allows us to
change the signature of the _SavedModel_ and to specify `training=False`. The
latter may be necessary as some layers (such as Dropout) act differently during
training.
```python
@tf.function(input_signature=[tf.TensorSpec([None, None], dtype=tf.float32)])
def model_predict(input_1):
    return {'outputs': model(input_1, training=False)}

model.save('../bin/data/model', signatures={'serving_default': model_predict})
```
***Note***: besides downsampling, all preprocessing is done inside the
computational graph. Wow! Thanks to the Python package _kapre_.
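To illustrate what "preprocessing inside the graph" means, here is a small, hypothetical sketch that uses plain `tf.signal` instead of _kapre_ (layer sizes and STFT parameters are made up): the waveform-to-spectrogram step is just another layer of the model, so the exported _SavedModel_ accepts raw audio.

```python
import tensorflow as tf

class LogSpectrogram(tf.keras.layers.Layer):
    """Toy stand-in for the kapre layers: compute a spectrogram inside the model."""
    def call(self, waveform):
        # waveform: (batch, samples) float32, e.g. one second of audio at 16 kHz
        stft = tf.signal.stft(waveform, frame_length=400, frame_step=160)
        return tf.math.log(tf.abs(stft) + 1e-6)

inputs = tf.keras.Input(shape=(16000,))            # one second of raw audio at 16 kHz
x = LogSpectrogram()(inputs)                       # preprocessing happens in-graph
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(8, activation='softmax')(x)  # dummy classifier head
model = tf.keras.Model(inputs, outputs)
```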
***Note***: check out _train.py_ if you want to learn more about the training
process.
### openFrameworks
Since the neural network was trained on one-second-long audio files sampled at
16kHz, we will need to ensure the same effective sampling rate and cut the audio
...

This example explores image generation using neural networks and is built around ...
A basic GAN consists of two parts: a generator that takes in an input and generates a desired output (here, in both cases, images) and a classifier that tries to predict whether its input was generated or not. While the classifier is trained in a classic manner on real and fake samples, the generator is trained _through_ the classifier. That is, its update depends on the output of the classifier when given a newly generated sample. A training step includes training both parts side by side.
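To make such a training step concrete, here is a generic sketch of a basic GAN update in TensorFlow 2. This is not the code from this example; `generator`, `discriminator`, the optimizers and the data arguments are placeholders.

```python
import tensorflow as tf

# binary cross-entropy on the classifier's raw (logit) outputs
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(generator, discriminator, gen_opt, disc_opt, inputs, real_images):
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        fake_images = generator(inputs, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # the classifier learns to tell real samples from generated ones
        disc_loss = (bce(tf.ones_like(real_logits), real_logits) +
                     bce(tf.zeros_like(fake_logits), fake_logits))
        # the generator is trained through the classifier: it is rewarded
        # when the classifier labels its output as real
        gen_loss = bce(tf.ones_like(fake_logits), fake_logits)
    gen_grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
    disc_grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    gen_opt.apply_gradients(zip(gen_grads, generator.trainable_variables))
    disc_opt.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
    return gen_loss, disc_loss
```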
### TensorFlow2
As with all examples, you can download the pretrained model from the assets, copy it to the _bin/data_ folder and name it _model_.
If you want to train the neural network by yourself, you can edit the _config.py_ file in the _python_ folder and run _main.py_. It is highly recommended to use a GPU for this purpose.
Check this [post](https://www.tensorflow.org/tutorials/generative/pix2pix?hl=en) for more information on the training procedure.
...

In this example we use this pattern but also define a subclass with a specialized ...
However, it is very important to reconstruct the way the data was treated during training, leaving out the steps that only serve generalization, such as added noise and image augmentation.
Once more we want to stress that you can call many TensorFlow operations through cppflow.
```c++
// map the pixel values from [0, 255] to [-1, 1], as was done during training
input = cppflow::div(input, cppflow::tensor({127.5f}));
input = cppflow::add(input, cppflow::tensor({-1.0f}));
```
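For comparison, the corresponding step on the Python side of such a pipeline is typically a one-liner like the following (a sketch, not this example's actual training code):

```python
import tensorflow as tf

def normalize(image):
    # map uint8 pixel values from [0, 255] to the [-1, 1] range used during training
    return tf.cast(image, tf.float32) / 127.5 - 1.0
```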
...

This is an example for realtime (~30fps @ RTX 2070 Super) neural style transfer
The Python code has been taken from [this repo](https://github.com/cryu854/FastStyle). Neural style transfer can also be done using GANs. However, this method is much faster. Check this [post](https://www.tensorflow.org/tutorials/generative/style_transfer?hl=en) for more information on this topic.
Nevertheless, we highly recommend using a GPU.
Make sure to download and extract the folder containing multiple _SavedModels_ from the assets. The easiest way is to use the script provided in _../scripts/download_example_model.sh_. The application will by default look for a folder _models_ in _bin/data/_.
### TensorFlow2
You can download the checkpoints for each model using the bash script _../scripts/download_training_examples.sh_.
This example inherits a small problem: the computational graph uses a function that relies on the size of the tensor, and this leads to an error when saving the model. We have to specify the input dimensions using the following wrapper:
```python
@tf.function(input_signature=[tf.TensorSpec([None, 480, 640, 3], dtype=tf.float32)])
def model_predict(input_1):
    return {'outputs': network(input_1, training=False)}
```
***Note***: run the Python script _python/checkpoint2SavedModel.py_ on the downloadable checkpoints to change the input signatures!
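Under the hood, such a conversion boils down to restoring the checkpoint into the network and re-saving it with the fixed input signature shown above. Here is a rough, hypothetical sketch of that idea; the `build_network` stand-in and the paths are placeholders, see _python/checkpoint2SavedModel.py_ for the actual script.

```python
import tensorflow as tf

def build_network():
    # placeholder for the FastStyle transformer network defined in the python folder
    return tf.keras.Sequential(
        [tf.keras.layers.Conv2D(3, 3, padding='same', input_shape=(None, None, 3))])

network = build_network()
ckpt = tf.train.Checkpoint(model=network)
ckpt.restore(tf.train.latest_checkpoint('checkpoints/')).expect_partial()

@tf.function(input_signature=[tf.TensorSpec([None, 480, 640, 3], dtype=tf.float32)])
def model_predict(input_1):
    return {'outputs': network(input_1, training=False)}

tf.saved_model.save(network, '../bin/data/models/my_style',
                    signatures={'serving_default': model_predict})
```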
### openFrameworks
In this example we will use the `ThreadedModel` class and augment the `runModel` function. This way we can modify the inputs and outputs inside the thread.
...