The convolution demo in Stanford’s course on CNNs (CS231n) explains this well. Essentially, you take the dot product of the filter and the input once for every entry of your output volume, shifting the filter (width- and height-wise) by `stride` units around the input to fill up the output.
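To make that concrete, here is a minimal numpy sketch of that process for a single-channel input with no padding (the function name and shapes are just illustrative, not TensorFlow's API):

```python
import numpy as np

def conv2d_valid(x, w, stride=1):
    """Naive 2D convolution (no padding): one dot product per output entry.
    The filter w is shifted by `stride` units across the input's height and width."""
    fh, fw = w.shape
    oh = (x.shape[0] - fh) // stride + 1
    ow = (x.shape[1] - fw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Take the patch under the filter and compute the dot product.
            patch = x[i * stride:i * stride + fh, j * stride:j * stride + fw]
            out[i, j] = np.sum(patch * w)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
w = np.ones((2, 2))
print(conv2d_valid(x, w, stride=2))  # 2x2 output: [[10. 18.] [42. 50.]]
```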

For TensorFlow’s implementation of conv2d, you can read the code for the function `convolution` here.

For SAME padding, the output dimensions are `ceil(input_dimensions / stride)`. The input is padded with zeros if the input dimensions are not divisible by the stride.

Let me know if you have further questions.

I think the maths probably won’t be a problem – if you can understand this post on Kalman Filters and know derivatives etc. well enough to roughly follow what’s going on in back-propagation (neural networks), you should be fine. Most of the emphasis is on practical implementation and intuition.

No, I haven’t posted the list of free deep learning resources I’ve found most helpful yet – thanks for reminding me! Will keep you updated.

Thank you in advance 🙂

`-1` means that the dimension can take any value there. Specifically, the first position refers to the number of examples you feed in, so the `-1` in the first position means you can feed in as many or as few examples as you like in one go.
You want this because you may change your batch size (or your final batch may have a different number of examples if the total number of training examples is not divisible by your batch size). Hope this helps!
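You can see the same behaviour in numpy, whose `reshape` treats `-1` the same way as `tf.reshape` (the shapes below are just an example):

```python
import numpy as np

# numpy analogue of tf.reshape(x, [-1, 28, 28, 1]):
# -1 lets the batch dimension be inferred from the data.
batch = np.zeros((5, 784))            # 5 flattened 28x28 images
images = batch.reshape(-1, 28, 28, 1)
print(images.shape)                   # (5, 28, 28, 1)

# The identical reshape works unchanged for any batch size:
print(np.zeros((32, 784)).reshape(-1, 28, 28, 1).shape)  # (32, 28, 28, 1)
```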

`x = tf.reshape(x, shape=[-1, 28, 28, 1])` – thanks.

– Kasisto (engaging with customers)

– Howdy (workplace bot), Cleo (personal finance)

– X.ai, Legal Robot, textio (recruiting)

– X.ai

Hope this helps!