#robotics Logs

Dec 17 2018


12:02 AM rue_mohr: wow, I'm amassing a lot of documents that are going to take me a long time to work thru
07:31 AM HappySocks: Hi! does anyone know what's the difference between an AIV and an AMR?
07:32 AM deshipu: at first sight it seems to be two letters, M and R
07:32 AM HappySocks: :)
07:35 AM deshipu: looks like AIV is a recent coinage and the difference is that it's much more buzzwordy and meaningless
07:36 AM HappySocks: As far as I know, Autonomous Intelligent Vehicles operate in known environments and plan their own paths, in contrast to Autonomous Mobile Robots, which operate in unknown environments, so it's autonomous navigation vs SLAM plus navigation
07:37 AM deshipu: I would also wager that AIVs are considerably larger
07:37 AM HappySocks: but the underlying technology doesn't seem different enough to warrant a different term, surely a robot capable of navigating autonomously is able to do SLAM and vice versa? Isn't the difference more of an external factor than a robot-defining feature?
07:38 AM deshipu: HappySocks: I would say that every AIV is an AMR
07:38 AM deshipu: HappySocks: yeah, it's just a marketing term
07:39 AM deshipu: I can't see any source that would use "AIV" that is older than a year or two
07:39 AM HappySocks: deshipu: Thanks, that's what I thought. I was scared of missing something though.
09:03 AM rue_mohr: did someone say that hidden layers are not actually trained?
09:07 AM rue_mohr: armyofevilrobots, why were people working on neural networks in 1962?
09:07 AM rue_mohr: analog computers?
09:10 AM deshipu: mostly simple visual recognition, iirc
09:10 AM deshipu: perceptron and such
09:11 AM rue_mohr: how do the weights of the hidden layers get set up?
09:11 AM deshipu: https://en.wikipedia.org/wiki/Perceptron#Learning_algorithm
09:12 AM rue_mohr: mur I have a LOT of stuff to read, I need short generalizations
09:12 AM deshipu: in short, the original perceptron didn't have hidden layers
09:12 AM deshipu: the later versions used backpropagation
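For context, a minimal sketch of the classic single-layer perceptron learning rule deshipu is referring to; the toy data (logical AND) and learning rate below are made-up values for illustration, not something from the discussion:

```python
# Minimal sketch of the original single-layer perceptron learning rule
# (no hidden layers). Toy data and learning rate are illustrative assumptions.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])  # one weight per input
    b = 0.0                   # bias
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0  # step activation
            err = target - pred                # -1, 0, or +1
            w += lr * err * xi                 # nudge weights toward the target
            b += lr * err
    return w, b

# toy example: learn logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
print(train_perceptron(X, y))
```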
09:13 AM rue_mohr: and in backprop you adjust your output layer weights and push the weights back into the hidden layers somehow?
09:14 AM deshipu: you calculate the errors and use that to adjust the weights
09:14 AM rue_mohr: that's what you do with no hidden layers
09:15 AM rue_mohr: the problem is that you can't see where the errors are in a large hidden layer mesh
09:16 AM deshipu: well, I would link you to the description of the algorithm on wikipedia, but you would again complain
09:16 AM rue_mohr: if you knew how big the stack was that I'd already been given, you would know why
09:16 AM deshipu: I guess you are too busy for this discussion
09:16 AM deshipu: see you when you have some more time
09:16 AM rue_mohr: dude, I have 4 huge pdfs and a pretty large source file to read thru yet
09:18 AM deshipu: you better get right down to it then
09:18 AM rue_mohr: I'm starting to think that my 'perfection game' method of adjusting weights is more efficient, but I can't try anything yet (grrr)
09:18 AM rue_mohr: I'm reading now (duh)
09:19 AM deshipu: generally speaking, there was no single preferred algorithm, as that was basically the main subject of research back then
09:19 AM rue_mohr: damn I have to go to work in 2 mins
09:20 AM deshipu: the general idea is that you adjust the weights on the hidden layers proportionally to how much they contributed to the error
09:21 AM rue_mohr: aka you remember how active each node was and add to or subtract from its weight based on the output error
10:00 AM deshipu: and based on the sum of all the weights leading to that node
10:00 AM deshipu: from the output node with the error
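A rough numpy sketch of what that looks like with one hidden layer: the output error is computed first, pushed back through the weights to get each hidden node's share, and then each layer's weights are adjusted in proportion to how active the node was. The network shape, sigmoid activation, squared-error loss, and toy data are assumptions for illustration:

```python
# One step of backpropagation for a tiny 2-3-1 network.
# Shapes, sigmoid activation, squared-error loss and data are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # input (2) -> hidden (3)
W2 = rng.normal(size=(1, 3))   # hidden (3) -> output (1)
x = np.array([0.5, -0.2])      # one input sample
t = np.array([1.0])            # desired output
lr = 0.5

# forward pass
h = sigmoid(W1 @ x)            # hidden activations ("how active each node was")
y = sigmoid(W2 @ h)            # network output

# backward pass
delta_out = (y - t) * y * (1 - y)             # output error times activation derivative
delta_hid = (W2.T @ delta_out) * h * (1 - h)  # error pushed back through the weights

# weight updates, proportional to each node's contribution to the error
W2 -= lr * np.outer(delta_out, h)
W1 -= lr * np.outer(delta_hid, x)
```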
11:23 AM veverak: rue_mohr: you can spot backpropagation across hidden layers in my code
11:23 AM veverak: it's also in the presentation
11:23 AM veverak: the change of weight for a neuron is based on the error of that neuron
11:24 AM veverak: for the output layer, the error of a neuron is the difference between its output and the desired output
11:25 AM veverak: for hidden layers, the error of a neuron is based on the errors of the neurons in the upper layer, the derivative of the actual output, and one other thing...
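Spelled out in the same style as the sketch above, veverak's rule for a hidden layer would look roughly like the helper below; the sigmoid activation, whose derivative a * (1 - a) would be the unnamed "one other thing", is an assumption here, not a quote from the presentation:

```python
import numpy as np

def hidden_delta(W_upper, delta_upper, a):
    # Error of a hidden layer's neurons: the upper layer's errors pushed back
    # through the connecting weights, scaled by the activation derivative.
    # Sigmoid is assumed, so the derivative is a * (1 - a).
    return (W_upper.T @ delta_upper) * a * (1 - a)
```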
01:13 PM ldlework: http://logos.ldlework.com/caps/2018-12-17-19-11-36.png
06:55 PM rue_mohr: I'm still buried in reading
06:55 PM rue_mohr: I skimmed over your code, have yet to do a detailed read
06:56 PM rue_mohr: ldlework, :)
09:23 PM rue_mohr: oh I put the HASEL actuator on youtube
09:24 PM rue_mohr: https://www.youtube.com/watch?v=c5fCZWY1ljU
09:24 PM rue_mohr: short video :)
09:37 PM deusexmachina: rue_mohr, why don't you form it into a solid mass?
09:38 PM rue_mohr: hu?
09:40 PM deusexmachina: looks like gelatin... there are ways to harden it and polarize it, not that I know how
09:41 PM deusexmachina: apparently stretching it while you chemically alter it is a very delicate task
09:45 PM rue_mohr: no, it's oil
09:49 PM rue_mohr: dielectric const ~3
09:50 PM deusexmachina: try to make it into a solid with unidirectional actuation
09:52 PM rue_mohr: it was a voltage test
09:52 PM deusexmachina: ahh
09:52 PM deusexmachina: how much can it contract by in terms of volume?
09:53 PM rue_mohr: it's not physically designed to contract
09:54 PM Tom_itx: what's that for?
09:59 PM Tom_itx is now known as Tom_L
09:59 PM rue_mohr: it's a HASEL, does it matter?
09:59 PM Tom_L: no
09:59 PM rue_mohr: HASEL is the latest thing
09:59 PM Tom_L: does the current affect the contraction rate?
09:59 PM rue_mohr: it's like 40000V
10:00 PM rue_mohr: ooo, non-Newtonian ferromagnetic fluid
10:02 PM deusexmachina: you guys seen this? https://devblogs.nvidia.com/nv-wavenet-gpu-speech-synthesis/
11:48 PM rue_mohr: heh
11:49 PM rue_mohr: I wonder if it's proof that a complex enough video system can do anything
11:49 PM rue_mohr: I wonder how I could make the most awesome shop ever
11:59 PM rue_mohr: Alexia, turn off everything.