Once we have chosen an architecture and set our hyperparameters, we proceed to the training loop, where our goal is to find parameter values that minimize our loss function. After training, we will need these parameters in order to make future predictions. Additionally, we will sometimes wish to extract the parameters, perhaps to reuse them in some other context, to save our model to disk so that it may be executed in other software, or for examination in the hope of gaining scientific understanding.

Most of the time, we will be able to ignore the nitty-gritty details of how parameters are declared and manipulated, relying on deep learning frameworks to do the heavy lifting. However, when we move away from stacked architectures with standard layers, we will sometimes need to get into the weeds of declaring and manipulating parameters. In this section, we cover the following:
- Accessing parameters for debugging, diagnostics, and visualizations.
- Sharing parameters across different model components.
We start by focusing on an MLP with one hidden layer.

import torch
from torch import nn

net = nn.Sequential(nn.LazyLinear(8), nn.ReLU(), nn.LazyLinear(1))
X = torch.rand(size=(2, 4))
net(X).shape

torch.Size([2, 1])
6.2.1. Parameter Access
Let us start with how to access parameters from the models that you already know.
When a model is defined via the Sequential class, we can first access any layer by indexing into the model as though it were a list. Each layer's parameters are conveniently located in its attribute.
We can inspect the parameters of the second fully connected layer as follows.

net[2].state_dict()
OrderedDict([('weight',
tensor([[-0.2523, 0.2104, 0.2189, -0.0395, -0.0590, 0.3360, -0.0205, -0.1507]])),
('bias', tensor([0.0694]))])
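Beyond inspecting a single layer, we can also walk over every parameter in the network at once. A minimal sketch, assuming PyTorch's `named_parameters` and a network mirroring the one above:

```python
import torch
from torch import nn

# Same one-hidden-layer MLP as in the text; LazyLinear infers the
# input dimension on the first forward pass.
net = nn.Sequential(nn.LazyLinear(8), nn.ReLU(), nn.LazyLinear(1))
X = torch.rand(size=(2, 4))
net(X)  # run once so the lazy layers materialize their parameters

# Names encode the layer index inside the Sequential container and
# the role of the tensor within that layer.
for name, param in net.named_parameters():
    print(name, param.shape)
# 0.weight torch.Size([8, 4])
# 0.bias torch.Size([8])
# 2.weight torch.Size([1, 8])
# 2.bias torch.Size([1])
```

Note that the ReLU at index 1 contributes no entries, since activation layers carry no parameters.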
We can see that this fully connected layer contains two parameters, corresponding to that layer's weights and biases, respectively.
6.2.1.1. Targeted Parameters
Note that each parameter is represented as an instance of the parameter class. To do anything useful with the parameters, we first need to access the underlying numerical values. There are several ways to do this. Some are simpler while others are more general. The following code extracts the bias from the second neural network layer, which returns a parameter class instance, and further accesses that parameter's value.

type(net[2].bias), net[2].bias.data
(torch.nn.parameter.Parameter, tensor([0.0694]))
Parameters are complex objects, containing values, gradients, and additional information. That is why we need to request the value explicitly.

In addition to the value, each parameter also allows us to access the gradient. Because we have not invoked backpropagation for this network yet, it is in its initial state.

net[2].weight.grad == None
True
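Once we do invoke backpropagation, the gradient attribute is populated. A minimal sketch, where the scalar "loss" is simply the sum of the outputs, chosen only for illustration:

```python
import torch
from torch import nn

net = nn.Sequential(nn.LazyLinear(8), nn.ReLU(), nn.LazyLinear(1))
X = torch.rand(size=(2, 4))

# An arbitrary scalar loss so that backward() can be called.
loss = net(X).sum()
loss.backward()

print(net[2].weight.grad is None)   # False: the gradient now exists
print(net[2].weight.grad.shape)     # torch.Size([1, 8]), same as the weight
```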
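The introduction above also promises parameter sharing across model components. A minimal sketch of the idea, assuming PyTorch: registering the same layer object at two positions makes both occurrences refer to the very same Parameter tensors, so any update to one is an update to both.

```python
import torch
from torch import nn

# A single layer object registered twice inside the Sequential:
# both occurrences share the same underlying Parameter tensors.
shared = nn.LazyLinear(8)
net = nn.Sequential(nn.LazyLinear(8), nn.ReLU(),
                    shared, nn.ReLU(),
                    shared, nn.ReLU(),
                    nn.LazyLinear(1))
X = torch.rand(size=(2, 4))
net(X)  # materialize the lazy parameters

# net[2] and net[4] are literally the same module, so their weights
# are identical objects, not mere copies of equal values.
print(net[2].weight is net[4].weight)  # True
```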