TensorLayer Tricks
1.0.0
While research in deep learning continues to improve the world, we use a bunch of tricks to implement algorithms day to day.
Here is a summary of the tricks for using TensorLayer. If you find a trick that is particularly useful in practice, please open a pull request to add it to the document. If we find it to be reasonable and verified, we will merge it in.
Installation: to keep your TL version and edit the source code easily, you can download the whole repository by executing git clone https://github.com/zsdonghao/tensorlayer.git. As TL is growing fast, if you use pip, we suggest you install the master version.

Training/testing switching: set is_fix to True in DropoutLayer, and build different graphs for training and testing by reusing the parameters. You can also set a different batch_size and noise probability for each graph. The same method applies when you use GaussianNoiseLayer, BatchNormLayer and so on. Here is an example:

def mlp(x, is_train=True, reuse=False):
    with tf.variable_scope("MLP", reuse=reuse):
        net = InputLayer(x, name='in')
        # is_fix=True: the keep probability is fixed and dropout is switched on/off via is_train
        net = DropoutLayer(net, keep=0.8, is_fix=True, is_train=is_train, name='drop1')
        net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense1')
        net = DropoutLayer(net, keep=0.8, is_fix=True, is_train=is_train, name='drop2')
        net = DenseLayer(net, n_units=800, act=tf.nn.relu, name='dense2')
        net = DropoutLayer(net, keep=0.8, is_fix=True, is_train=is_train, name='drop3')
        net = DenseLayer(net, n_units=10, act=tf.identity, name='out')
        logits = net.outputs
        net.outputs = tf.nn.sigmoid(net.outputs)
    return net, logits
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')
net_train, logits = mlp(x, is_train=True, reuse=False)
net_test, _ = mlp(x, is_train=False, reuse=True)
cost = tl.cost.cross_entropy(logits, y_, name='cost')

More examples can be found here.
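Because both graphs reuse the same parameters, you can train with net_train and evaluate with net_test in the same session. A minimal sketch (the accuracy op below is illustrative, not part of the original example):

correct = tf.equal(tf.argmax(net_test.outputs, 1), y_)
acc = tf.reduce_mean(tf.cast(correct, tf.float32))  # evaluated with dropout disabled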
To get the variables you want to train, use tl.layers.get_variables_with_name:

train_vars = tl.layers.get_variables_with_name('MLP', True, True)
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost, var_list=train_vars)

Use tl.layers.get_layers_with_name to get a list of activation outputs from a network:

layers = tl.layers.get_layers_with_name(network, "MLP", True)

If your dataset is large, data loading and data augmentation will become the bottleneck and slow down training. To speed up data processing, you can use TensorFlow's native input pipeline, e.g. TFRecord.
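A minimal sketch of writing data into a TFRecord file with the TensorFlow 1.x API (the file name, feature keys, and dummy data below are illustrative):

import numpy as np
import tensorflow as tf

images = np.random.rand(10, 28, 28, 1).astype(np.float32)  # dummy data
labels = np.random.randint(0, 10, size=(10,))

writer = tf.python_io.TFRecordWriter("train.tfrecord")
for img, label in zip(images, labels):
    # serialize one sample as an Example with raw image bytes and an int label
    example = tf.train.Example(features=tf.train.Features(feature={
        "img_raw": tf.train.Feature(bytes_list=tf.train.BytesList(value=[img.tobytes()])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[int(label)])),
    }))
    writer.write(example.SerializeToString())
writer.close()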
If your dataset is small enough to fit into your machine's memory and the data augmentation is simple, then for easy debugging you can use tl.prepro.threading_data to augment a batch of data with multiple threads.
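A minimal sketch, assuming a batch of images X; the distort function and its parameters are illustrative:

import numpy as np
import tensorlayer as tl

X = np.random.rand(32, 28, 28, 1).astype(np.float32)  # dummy batch of images

def distort(x):
    # random rotation then random horizontal flip, applied to a single image
    x = tl.prepro.rotation(x, rg=20, is_random=True)
    x = tl.prepro.flip_axis(x, axis=1, is_random=True)
    return x

X_aug = tl.prepro.threading_data(X, fn=distort)  # process the whole batch with multiple threads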
TL provides pre-trained CNN models (e.g. VGG16) in tl.models. Use the whole VGG16 for ImageNet classification:

x = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get the whole model
vgg = tl.models.VGG16(x)
# restore pre-trained VGG parameters
sess = tf.InteractiveSession()
vgg.restore_params(sess)
# use for inferencing
probs = tf.nn.softmax(vgg.outputs)

To extract features with VGG16 and retrain a classifier with 100 classes, reuse the model up to the last feature layer:

x = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get VGG without the last layer
vgg = tl.models.VGG16(x, end_with='fc2_relu')
# add one more layer
net = tl.layers.DenseLayer(vgg, 100, name='out')
# initialize all parameters
sess = tf.InteractiveSession()
tl.layers.initialize_global_variables(sess)
# restore pre-trained VGG parameters
vgg.restore_params(sess)
# train your own classifier (only update the last layer)
train_params = tl.layers.get_variables_with_name('out')

To reuse the same model on different inputs:

x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
# get VGG without the last layer
vgg1 = tl.models.VGG16(x1, end_with='fc2_relu')
# reuse the parameters of vgg1 with different input
vgg2 = tl.models.VGG16(x2, end_with='fc2_relu', reuse=True)
# restore pre-trained VGG parameters (as they share parameters, we don't need to restore vgg2)
sess = tf.InteractiveSession()
vgg1.restore_params(sess)

TL can interact with other TF wrappers, which means that if you find some code or a model implemented with another wrapper, you can just use it. For example, Keras layers can be wrapped into a TL network with LambdaLayer:

import tensorflow as tf
import tensorlayer as tl
from keras.layers import *
from tensorlayer.layers import *
def my_fn(x):
    x = Dropout(0.8)(x)
    x = Dense(800, activation='relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(800, activation='relu')(x)
    x = Dropout(0.5)(x)
    logits = Dense(10, activation='linear')(x)
    return logits
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')  # input placeholder, added so the snippet is self-contained
network = InputLayer(x, name='input')
network = LambdaLayer(network, my_fn, name='keras')
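From here the wrapped network trains like any other TL network; a minimal sketch, assuming a label placeholder y_ as in the MLP example above:

y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')
cost = tl.cost.cross_entropy(network.outputs, y_, name='cost')
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)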
For NLP, tl.nlp provides tools to preprocess text. Tokenize sentences with tl.nlp.process_sentence:

>>> captions = ["one two , three", "four five five"]  # 2 sentences
>>> processed_capts = []
>>> for c in captions:
...     c = tl.nlp.process_sentence(c, start_word="<S>", end_word="</S>")
...     processed_capts.append(c)
>>> print(processed_capts)
... [['<S>', 'one', 'two', ',', 'three', '</S>'],
...  ['<S>', 'four', 'five', 'five', '</S>']]

Then create a vocabulary file with tl.nlp.create_vocab:

>>> tl.nlp.create_vocab(processed_capts, word_counts_output_file='vocab.txt', min_word_count=1)
... [TL] Creating vocabulary.
...   Total words: 8
...   Words in vocabulary: 8
...   Wrote vocabulary file: vocab.txt

Create a Vocabulary object from the txt vocabulary file created by tl.nlp.create_vocab:

>>> vocab = tl.nlp.Vocabulary('vocab.txt', start_word="<S>", end_word="</S>", unk_word="<UNK>")
... INFO:tensorflow:Initializing vocabulary from file: vocab.txt
... [TL] Vocabulary from vocab.txt : <S> </S> <UNK>
... vocabulary with 10 words (includes start_word, end_word, unk_word)
...     start_id: 2
...     end_id: 3
...     unk_id: 9
...     pad_id: 0

Then you can map words to IDs, or vice versa, as follows:
>>> vocab.id_to_word(2)
... 'one'
>>> vocab.word_to_id('one')
... 2
>>> vocab.id_to_word(100)
... '<UNK>'
>>> vocab.word_to_id('hahahaha')
... 9

Pad a batch of sequences to the same length with tl.prepro.pad_sequences:

>>> sequences = [[1, 1, 1, 1, 1], [2, 2, 2], [3, 3]]
>>> sequences = tl.prepro.pad_sequences(sequences, maxlen=None,
...         dtype='int32', padding='post', truncating='pre', value=0.)
... [[1 1 1 1 1]
...  [2 2 2 0 0]
...  [3 3 0 0 0]]

To compute the sequence_length of a batch of zero-padded sequences (e.g. for dynamic RNNs), use tl.layers.retrieve_seq_length_op2:

>>> data = [[1, 2, 0, 0, 0], [1, 2, 3, 0, 0], [1, 2, 6, 1, 0]]
>>> o = tl.layers.retrieve_seq_length_op2(data)
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> print(o.eval())
... [2 3 4]

To restore saved models, use tl.files.load_and_assign_npz, tl.files.load_and_assign_npz_dict, or tl.files.load_ckpt.
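For example, a minimal sketch of saving a network's parameters to an .npz file and restoring them (the file name is illustrative):

# save all parameters of a trained network
tl.files.save_npz(network.all_params, name='model.npz', sess=sess)
# later: restore them into a network with the same architecture
tl.files.load_and_assign_npz(sess=sess, name='model.npz', network=network)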
The default decay of BatchNormLayer is 0.9; for large datasets, set it to 0.999.
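A one-line sketch (the layer name here is illustrative):

net = tl.layers.BatchNormLayer(net, decay=0.999, is_train=True, name='bn1')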