keras

keras quick notes

Using the keras framework

1. We can start by importing TensorFlow.

import tensorflow as tf
print(tf.__version__)

2. We always need to normalize our data.

training_images  = training_images / 255.0
test_images = test_images / 255.0

3. Then, let’s design our model.

Sequential: That defines a SEQUENCE of layers in the neural network

Flatten: Remember earlier where our images were a square when you printed them out? Flatten just takes that square and turns it into a 1-dimensional array.

Dense: Adds a layer of neurons

Each layer of neurons needs an activation function to tell it what to do. There are lots of options, but just use these for now.

Relu effectively means “If X>0 return X, else return 0”, so what it does is pass only values of 0 or greater to the next layer in the network.

Softmax takes a set of values and effectively picks the biggest one. For example, if the output of the last layer looks like [0.1, 0.1, 0.05, 0.1, 9.5, 0.1, 0.05, 0.05, 0.05], it saves you from fishing through it looking for the biggest value by turning it into something close to [0,0,0,0,1,0,0,0,0] (strictly, a set of probabilities that sum to 1, with the largest input getting the largest share). The goal is to save a lot of coding!

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
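
As a side note, you can see what these two activations actually do to a small made-up vector. This is just a minimal sketch, assuming eager execution (TensorFlow 2.x); the values are purely illustrative:

import tensorflow as tf

x = tf.constant([-2.0, 1.0, 0.5, 3.0])    # made-up raw outputs of a layer
print(tf.nn.relu(x))                       # negatives clipped to 0: [0. 1. 0.5 3.]
print(tf.nn.softmax(x))                    # probabilities summing to 1; the 3.0 entry gets the largest share
print(tf.argmax(tf.nn.softmax(x)))         # index 3, i.e. where the biggest value sits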

4. The next thing to do is to actually build the model. We do this by compiling it with an optimizer and a loss function, and then training it with model.fit.

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(training_images, training_labels, epochs=5)

5. We can evaluate our model by using model.evaluate.

model.evaluate(test_images, test_labels)
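
To actually use the trained model on new data, you can call model.predict, which returns the softmax probabilities for every image, and then pick the largest one. A minimal sketch (index 0 is just an arbitrary test image chosen for illustration):

import numpy as np

classifications = model.predict(test_images)   # shape (10000, 10): one probability per class
print(np.argmax(classifications[0]))           # predicted class for the first test image
print(test_labels[0])                          # its true label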

Additional:

If I want to stop training once a desired value is reached, I can use a custom callback class (myCallback below).

import tensorflow as tf
print(tf.__version__)

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        # Stop training as soon as the loss drops below 0.4.
        if logs.get('loss') < 0.4:
            print("\nLoss is below 0.4 so cancelling training!")
            self.model.stop_training = True

callbacks = myCallback()
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images = training_images / 255.0
test_images = test_images / 255.0
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])
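
If you would rather stop on accuracy instead of loss, the same pattern works. A minimal sketch (the class name is my own, and it assumes the model was compiled with metrics=['accuracy'] so the metric shows up in logs; older TensorFlow versions report it under the key 'acc'):

class AccuracyCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        acc = logs.get('accuracy') or logs.get('acc')
        # Stop once training accuracy passes 90%.
        if acc is not None and acc > 0.9:
            print("\nReached 90% accuracy so cancelling training!")
            self.model.stop_training = True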

Keras with convolutions

1. The first thing we need to know is that the first convolution expects a single tensor containing everything. So instead of 60,000 28x28x1 items in a list, we have a single 4D tensor of shape 60000x28x28x1.

import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0

2. Instead of the input layer at the top, we’re going to add a convolution. The parameters are:

  • the number of filters. Purely arbitrary, but it is good to start with something on the order of 32
  • the size of the Convolution
  • the activation function
  • in the first layer, the shape of the input data
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')])
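
The full Fashion-MNIST example usually stacks a second Conv2D/MaxPooling2D pair before the Flatten, but either way the rest of the workflow is the same as before. A minimal sketch of compiling and training it; model.summary() is worth calling to see how each convolution and pooling step changes the output shape (epochs=5 is just an example value):

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)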

keras ImageDataGenerator

By the way, a few vocabulary words to remember: adorable (lovable), dominant (prevailing, prominent), condense (to compress).

Let’s set up our data generators that will read pictures in our source folders, convert them to float32 tensors, and feed them (with their labels) to our network.
We’ll have one generator for the training images and one for the validation images. Our generators will yield batches of images of size 300x300 and their labels (binary).
In Keras this can be done via the keras.preprocessing.image.ImageDataGenerator class using the rescale parameter. This ImageDataGenerator class allows you to instantiate generators of augmented image batches (and their labels) via .flow(data, labels) or .flow_from_directory(directory). These generators can then be used with the Keras model methods that accept data generators as inputs: fit_generator, evaluate_generator, and predict_generator.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)

train_generator = train_datagen.flow_from_directory(
    '/tmp/horse-or-human',
    target_size=(300, 300),
    batch_size=128,
    class_mode='binary'
)
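
A minimal sketch of feeding this generator to a model with fit_generator. The model itself is assumed here (it just has to accept 300x300x3 inputs and end in a single sigmoid unit for the binary labels), and steps_per_epoch=8 and epochs=15 are example values, not requirements:

history = model.fit_generator(
    train_generator,
    steps_per_epoch=8,   # number of batches drawn per epoch
    epochs=15,
    verbose=1)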
