I'm new to TensorFlow and TensorFlow Probability. I'm using the following network for a regression task.
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow.keras import Input, Model
from tensorflow.keras.optimizers import Adam

tfd = tfp.distributions

def normal_sp(params):
    # Column 0 of the output is the mean; column 1 parameterizes the scale.
    # The small constant keeps the scale strictly positive.
    return tfd.Normal(loc=params[:, 0:1],
                      scale=1e-3 + tf.math.softplus(0.05 * params[:, 1:2]))
# Scale each KL term by the number of training points (x is the training
# input array, not shown in the post).
kernel_divergence_fn = lambda q, p, _: tfp.distributions.kl_divergence(q, p) / (x.shape[0] * 1.0)
bias_divergence_fn = lambda q, p, _: tfp.distributions.kl_divergence(q, p) / (x.shape[0] * 1.0)
inputs = Input(shape=(1,), name="input_layer")
hidden = tfp.layers.DenseFlipout(50, bias_posterior_fn=tfp.layers.util.default_mean_field_normal_fn(),
                                 bias_prior_fn=tfp.layers.default_multivariate_normal_fn,
                                 kernel_divergence_fn=kernel_divergence_fn,
                                 bias_divergence_fn=bias_divergence_fn,
                                 activation="relu", name="DenseFlipout_layer_1")(inputs)
hidden = tfp.layers.DenseFlipout(100, bias_posterior_fn=tfp.layers.util.default_mean_field_normal_fn(),
                                 bias_prior_fn=tfp.layers.default_multivariate_normal_fn,
                                 kernel_divergence_fn=kernel_divergence_fn,
                                 bias_divergence_fn=bias_divergence_fn,
                                 activation="relu", name="DenseFlipout_layer_2")(hidden)
hidden = tfp.layers.DenseFlipout(100, bias_posterior_fn=tfp.layers.util.default_mean_field_normal_fn(),
                                 bias_prior_fn=tfp.layers.default_multivariate_normal_fn,
                                 kernel_divergence_fn=kernel_divergence_fn,
                                 bias_divergence_fn=bias_divergence_fn,
                                 activation="relu", name="DenseFlipout_layer_3")(hidden)
params = tfp.layers.DenseFlipout(2, bias_posterior_fn=tfp.layers.util.default_mean_field_normal_fn(),
                                 bias_prior_fn=tfp.layers.default_multivariate_normal_fn,
                                 kernel_divergence_fn=kernel_divergence_fn,
                                 bias_divergence_fn=bias_divergence_fn,
                                 name="DenseFlipout_layer_4")(hidden)
dist = tfp.layers.DistributionLambda(normal_sp)(params)

model_vi = Model(inputs=inputs, outputs=dist)

# Negative log-likelihood: the model's output is a distribution, so the
# loss is -log p(y | x) of the targets under it.
def NLL(y, dist):
    return -dist.log_prob(y)

model_vi.compile(Adam(learning_rate=0.002), loss=NLL)
model_params = Model(inputs=inputs, outputs=params)
My question is about the loss function:
In the example posted in the API docs, the author adds the KL divergence to the loss function (https://www.tensorflow.org/probability/api_docs/python/tfp/layers/DenseFlipout):

kl = sum(model.losses)
loss = neg_log_likelihood + kl
But in the example here, https://colab.research.google.com/github/tensorchiefs/dl_book/blob/master/chapter_08/nb_ch08_03.ipynb, the loss function is just the NLL. My question is: do I have to add the KL divergence manually, or does TensorFlow compute it automatically? If I have to do it manually, how do I do that, given that model.losses doesn't seem to work for me? Thanks to anyone who can help.
Answer:
If you train with Keras (model.fit), the per-layer losses (the KL terms) are included in the overall loss. (I'm 90% sure this is right; you can check by overriding kl_divergence_fn to return some absurd value and seeing whether your overall loss becomes absurd.)
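A minimal sketch of that sanity check, assuming the model_vi built above and some training arrays x and y (both hypothetical here):

# Divergence fn that ignores the real KL and returns an absurd constant.
absurd_divergence_fn = lambda q, p, _: tf.constant(1.0e6)

# Rebuild the DenseFlipout layers with
#   kernel_divergence_fn=absurd_divergence_fn,
#   bias_divergence_fn=absurd_divergence_fn,
# recompile, and fit. If Keras adds the per-layer losses automatically,
# the reported training loss will jump by millions (one term per kernel
# and per bias) on top of the NLL.
model_vi.fit(x, y, epochs=1, verbose=1)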
In the docs example (which is, well, a bit dated), Keras wasn't doing the training; the optimizer was applied to a manually written loss, so you had to fetch all the per-layer losses and add them in yourself.
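For reference, the manual pattern looks roughly like this against the model above (x_batch and y_batch stand in for a batch of training data):

# Old-style manual ELBO assembly, outside Keras's fit loop:
y_dist = model_vi(x_batch)                            # a tfd.Normal
neg_log_likelihood = -tf.reduce_mean(y_dist.log_prob(y_batch))
kl = sum(model_vi.losses)         # one KL term per DenseFlipout kernel/bias
loss = neg_log_likelihood + kl    # minimize this with your own optimizer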