[10:02:01] <[VEGETA]> back!
[14:16:28] <[VEGETA]> i just read that rtab-map doesn't support multi cameras!
[14:16:28] <[VEGETA]> now i need another way of detecting moving obstacles rather than a multi-kinect solution
[14:16:29] <[VEGETA]> defiant, i thought of putting a cheap long-range laser scanner at the top of the room to scan for moving objects
[14:16:29] <[VEGETA]> but can its data be added to the generated map the way I want?
[14:19:16] <deshipu> [VEGETA]: I think I would look for a robot that is similar to yours and integrated with ROS, and modify that
[14:19:33] <[VEGETA]> such as?
[14:19:44] <[VEGETA]> all robots are pricey
[14:19:58] <deshipu> I mean the code for them
[14:20:06] <deshipu> not the physical robot
[14:20:15] <[VEGETA]> the robot itself is not the problem, the detection mechanism is
[14:20:27] <[VEGETA]> how to detect dynamic obstacles and track them
[14:20:29] <[VEGETA]> that is it
[14:21:04] <deshipu> well, the most foolproof method of detecting obstacles for me so far was bumping into them
[14:21:22] <deshipu> usually with my shin
[14:23:52] <[VEGETA]> troll ^
[14:24:02] <[VEGETA]> obviously that is not required here
[14:24:20] <deshipu> it's good to always have a bumper sensor as the last resort
[14:26:13] <[VEGETA]> yes
[14:26:27] <[VEGETA]> but for research proposal, u should have a plan
[14:27:15] <deshipu> well, isn't research all about figuring out how to do things?
[14:27:28] <deshipu> I mean, if you already know how, it's not much research?
[14:28:03] <[VEGETA]> i want to achieve one idea
[14:28:23] <[VEGETA]> which is detecting dynamic obstacles and modify robot movement plan according to it
[14:29:19] <deshipu> I assume that by "dynamic obstacles" you basically mean "people walking around the room"?
[14:29:40] <[VEGETA]> yes, people or any other type of obstacles
[14:29:51] <deshipu> and not, say, a swarm of bees?
[14:29:54] <[VEGETA]> anything that moves in the 3d map
[14:30:34] <deshipu> is the robot flying?
[14:30:37] <[VEGETA]> no
[14:30:43] <deshipu> then why 3d?
[14:30:45] <[VEGETA]> ground indoor rover
[14:30:56] <[VEGETA]> well,... i guess it is better
[14:31:08] <[VEGETA]> as the pkg already does that
[14:31:16] <[VEGETA]> do you have an idea
[14:32:54] <deshipu> well, tracking anything in 3d space usually involves comparing consecutive frames and finding elements that look similar on them, but changed places
[14:33:27] <deshipu> there are several algorithms to do that, usually quite computing power intensive
[14:33:48] <deshipu> figuring out what "similar" actually means here is the hard part
[14:33:57] <SpeedEvil> Depends - to purely work out distance, with two parallel cameras can be very lightweight
[14:34:05] <SpeedEvil> to generate a point-cloud
[14:34:32] <deshipu> SpeedEvil: how do you know which pixels correspond to which on the two images?
[14:34:58] <[VEGETA]> speedevil, i thought of "marking" point clouds but i don't think it will work
[14:35:00] <SpeedEvil> statistics
[14:35:17] <[VEGETA]> the slam pkg i am gonna use is rtab-map
[14:35:24] <deshipu> SpeedEvil: that doesn't help much
[14:35:25] <[VEGETA]> it doesn't allow multiple kinects
[14:35:38] <[VEGETA]> so having one kinect is 4m max range
[14:35:45] <[VEGETA]> won't be good enough
[14:35:58] <[VEGETA]> thus i thought of laser range detectors (2d)
[14:36:14] <[VEGETA]> but when laser finds a moving target
[14:36:30] <[VEGETA]> how can i adjust it in the 3d map created by rtabmap
[14:36:44] <[VEGETA]> i mean, how to add it to the map as a dynamic element
[14:36:56] <[VEGETA]> if that is solved, then problem is easier i guess
[14:36:59] <SpeedEvil> deshipu: Unless you're in a pessimal case, if you slide the images in X against each other, there will be no more than one peak for each statistically significant blob
[14:37:21] <SpeedEvil> deshipu: yes, it's going to break if you're looking at a repeating pattern
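SpeedEvil's "slide the images in X against each other" idea is essentially 1-D block matching: score each candidate horizontal shift and keep the best. A minimal sketch (the function name and parameters are illustrative, not from any library):

```python
import numpy as np

def estimate_disparity(left_row, right_row, max_shift=32):
    """Estimate horizontal disparity between two image rows by sliding
    one against the other and scoring sum-of-squared-differences.
    Breaks down on repeating patterns, exactly as noted above."""
    n = len(left_row)
    best_shift, best_score = 0, np.inf
    for shift in range(max_shift):
        # compare only the region where the shifted rows overlap
        a = left_row[shift:]
        b = right_row[:n - shift]
        score = np.mean((a - b) ** 2)
        if score < best_score:
            best_score, best_shift = score, shift
    return best_shift
```

With a calibrated rig, depth then follows from disparity as focal_length * baseline / disparity, one blob at a time.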
[14:40:37] <[VEGETA]> speedevil: what do u think about my lines above please?
[14:40:49] <SpeedEvil> [VEGETA]: I don't.
[14:41:02] <SpeedEvil> I don't know about that particular OS, or kinect in detail
[14:41:15] <[VEGETA]> i am talking about ROS
[14:41:23] <SpeedEvil> yes
[14:41:58] <[VEGETA]> you don't know about ros
[14:42:43] <[VEGETA]> u seem to know about mapping so i thought you might have an idea
[14:42:54] <deshipu> SpeedEvil: sliding images against each other and scanning for similarity is computationally expensive in my book
[14:43:11] <SpeedEvil> deshipu: It's really not.
[14:43:21] <SpeedEvil> deshipu: Unless you mean 'I can't do it on an 8 bit arduino'
[14:43:28] <deshipu> I guess I should stop thinking about microcontrollers
[14:43:44] <SpeedEvil> STM32 is quite fast enough to do the above
[14:43:53] <[VEGETA]> you could use pi 3 at least
[14:45:13] <deshipu> SpeedEvil: depends which one
[14:45:22] <[VEGETA]> deshipu, can you help with my question
[14:45:23] <deshipu> SpeedEvil: you mean the cortex-m4 ones, I presume?
[14:45:58] <deshipu> [VEGETA]: sorry, I'm in the same boat as SpeedEvil -- I have some vague ideas about how such things are done in theory, but no idea about ros or that particular package
[14:46:23] <deshipu> [VEGETA]: I wonder if there is a ROS channel on freenode
[14:46:29] <deshipu> [VEGETA]: or if they have a forum
[14:46:37] <[VEGETA]> ros channel here is dead
[14:46:39] <[VEGETA]> xD
[14:46:49] <[VEGETA]> only me and some 2 guys maximum
[14:46:53] <deshipu> well, that's a glorious opportunity for you to revive it
[14:47:02] <[VEGETA]> if you have any idea even by theory that is enough
[14:47:16] <deshipu> well, I already told you about the theory
[14:47:18] <[VEGETA]> deshipu, i don't have the dragon balls right now
[14:47:34] <deshipu> I'm sure the dragon is happy about that
[14:47:58] <[VEGETA]> shenlong can solve any task
[14:48:10] <[VEGETA]> what is your theory then
[14:48:22] <[VEGETA]> detecting dynamic targets and track them
[14:49:47] <deshipu> well, you compare your images/point clouds/whatever data you have between the two frames, locate clusters that are similar but moved, calculate the movement from the difference
[14:50:30] <[VEGETA]> that was my idea before, but only to point clouds
[14:50:39] <[VEGETA]> what is clusters?
[14:50:40] <deshipu> with high enough frame rate the differences will be so small, that you will be able to find the clusters with a relatively low-radius local filter
[14:50:54] <deshipu> [VEGETA]: I mean clusters of points
[14:51:08] <deshipu> like, a group of points that are close together and move together?
[14:51:15] <[VEGETA]> you mean a group of points that represents a body
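deshipu's recipe (group points into clusters, match clusters between consecutive frames, read the motion off the centroid shift) can be sketched roughly like this. The single-linkage clustering and all names here are illustrative, not from ROS or rtab-map:

```python
import numpy as np

def cluster_points(points, radius=0.3):
    """Label points so that any two points closer than `radius` share a
    cluster (single-linkage flood fill)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            near = np.where(
                (np.linalg.norm(points - points[j], axis=1) < radius)
                & (labels == -1))[0]
            labels[near] = current
            stack.extend(near.tolist())
        current += 1
    return labels

def track_clusters(prev_pts, curr_pts, radius=0.3):
    """Match each current cluster to the nearest previous centroid;
    return (centroid, displacement-since-last-frame) pairs."""
    prev_lab = cluster_points(prev_pts, radius)
    curr_lab = cluster_points(curr_pts, radius)
    prev_c = np.array([prev_pts[prev_lab == k].mean(axis=0)
                       for k in range(prev_lab.max() + 1)])
    tracks = []
    for k in range(curr_lab.max() + 1):
        c = curr_pts[curr_lab == k].mean(axis=0)
        nearest = prev_c[np.argmin(np.linalg.norm(prev_c - c, axis=1))]
        tracks.append((c, c - nearest))  # divide by frame dt for velocity
    return tracks
```

As deshipu says, with a high enough frame rate the displacements are small, so nearest-centroid matching is usually unambiguous.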
[14:52:14] <deshipu> note that the whole idea of an "object" is just an artifact of how human mind works to make sense of the world -- the reality has no "objects", it's just one big continuous reality
[14:58:26] <[VEGETA]> what is that? XD
[14:59:26] <deshipu> so in order to make a robot "see" objects as objects, you effectively have to try to recreate that part of the human brain
[14:59:35] <deshipu> more or less exactly
[15:09:28] <[VEGETA]> adjusting point clouds is a good idea, if it is possible to be made
[15:10:09] <[VEGETA]> but the damn kinect offers only 5m of range xD
[15:10:35] <[VEGETA]> i initially thought of putting multiple kinect devices in the room to cover all objects
[15:10:55] <[VEGETA]> then use your idea to separate dynamic obstacles
[15:11:32] <[VEGETA]> but then realized that rtab-map package (which is the main slam pkg) can not have multiple online cameras
[15:11:37] <[VEGETA]> on one map
[15:17:14] <SpeedEvil> deshipu: Err - no.
[15:17:46] <SpeedEvil> deshipu: There is a vast difference between what the human brain does, and - for example - working out a contiguous point cloud of objects that reflect or absorb light.
[15:18:41] <SpeedEvil> Going from that point cloud, and knowing 'I can drive over surfaces which go up or down 30 degrees and have no sharp bits and have a passage wide enough' does not require any complex computer vision problems and is very useful.
[15:19:23] <SpeedEvil> If you have a robot vacuum - say - that's plenty to drive around cleaning and avoiding cats.
[15:19:42] <[VEGETA]> speedevil, the computer needs to keep tracking of all these bodies
[15:19:46] <SpeedEvil> Simply, you avoid it as you do any object > x cm.
[15:19:49] <SpeedEvil> Not really
[15:19:50] <[VEGETA]> keep calculating
[15:20:08] <[VEGETA]> and representing them in a map
[15:20:24] <SpeedEvil> It needs to keep track of the bodies and build a world view if it makes movements such that it requires to know things it can't sense.
[15:21:07] <SpeedEvil> In the case of a robot vacuum, for example, retaining a map of much more than rooms, and things to clean/not clean may not be useful.
[15:21:12] <SpeedEvil> Or a lawnmower.
[15:21:36] <[VEGETA]> let us assume that having a map is essential to the robot
[15:21:50] <[VEGETA]> it needs to know where are the dynamic obstacles
[15:21:53] <[VEGETA]> and how they move
[15:21:53] <SpeedEvil> If you are doing things like - for example - drifting round a racetrack - then yes, you need to retain a detailed map
[15:22:22] <SpeedEvil> Only in some cases, if you can in all cases stop before you do damage, you don't need to remember dynamic obstacles.
[15:22:37] <SpeedEvil> For certain classes of robot anyway.
[15:23:01] <SpeedEvil> If your cleaner robot gets stuck inside theoretically soluble mazes, that is unlikely to be a real-world issue.
[15:23:53] <deshipu> SpeedEvil: I was talking about recognizing objects, not route planning
[15:24:09] <[VEGETA]> route planning is the next stage
[15:24:14] <SpeedEvil> deshipu: Sure - it's different classes of problem.
[15:24:31] <[VEGETA]> first i have to detect dynamic obstacles
[15:24:38] <[VEGETA]> estimate their future movements
[15:24:39] <deshipu> [VEGETA]: can you get the point clouds from all your kinects, and merge them into a single point cloud, knowing their relative positions?
[15:24:45] <SpeedEvil> deshipu: sometimes 'object of interest', 'road' and 'everything else that I assume I cannot drive through' is enough
[15:24:46] <[VEGETA]> then adjust route
[15:25:27] <[VEGETA]> deshipu, well I didn't search but I guess yes I can access point clouds themselves
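If each kinect's pose in a common world frame is known (rotation R, translation t), merging the clouds deshipu asks about is mostly bookkeeping. A minimal sketch under that assumption, not rtab-map's API:

```python
import numpy as np

def merge_clouds(clouds_and_poses):
    """Each entry is (points as an Nx3 array, R 3x3, t length-3), with
    p_world = R @ p_sensor + t.  Transform every sensor's cloud into the
    common frame, then stack them into one cloud."""
    return np.vstack([pts @ R.T + t for pts, R, t in clouds_and_poses])
```

In practice the hard part is calibrating those relative poses accurately, not the transform itself.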
[15:25:28] <SpeedEvil> Estimating future position of dynamic obstacles also raises the issue of mispredictions
[15:25:33] <deshipu> SpeedEvil: still, as soon as the word "object" appears, the task becomes hard, as you have to at least approximate what humans understand as "object"
[15:25:57] <[VEGETA]> speedevil, don't worry, you just assume they go in the same direction and the same speed
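That "same direction, same speed" assumption is the constant-velocity model; a minimal sketch (names are illustrative):

```python
import numpy as np

def constant_velocity_predict(p_prev, p_curr, dt, horizon):
    """Estimate velocity from two consecutive position fixes taken dt
    seconds apart, then extrapolate `horizon` seconds ahead assuming the
    obstacle keeps that direction and speed."""
    p_prev = np.asarray(p_prev, dtype=float)
    p_curr = np.asarray(p_curr, dtype=float)
    v = (p_curr - p_prev) / dt
    return p_curr + v * horizon
```

Mispredictions, as SpeedEvil notes, are the price of this model: anything that turns or stops breaks the extrapolation, so the horizon should stay short.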
[15:26:06] <SpeedEvil> deshipu: Sure. For some classes of object, you can pull them tractably from the point cloud.
[15:26:31] <SpeedEvil> deshipu: For example, tennis balls on a tennis court that you need to pick up.
[15:27:34] <SpeedEvil> If it's 'find a duck' - then no, that is actually hard.
[15:28:04] <[VEGETA]> no no no
[15:28:12] <[VEGETA]> i don't care about what is that object
[15:28:31] <[VEGETA]> i only care about that it is a moving object i need to keep away from
[15:28:37] <[VEGETA]> being a duck or a dog
[15:28:56] <[VEGETA]> actually, it is fine by me to substitute it by a cylinder in the 3d map
[15:31:09] <deshipu> SpeedEvil: I'm not saying you have to reproduce human vision
[15:31:20] <deshipu> SpeedEvil: but take a ladybug sitting on a tennis ball
[15:31:24] <deshipu> SpeedEvil: one object or two?
[15:32:55] <SpeedEvil> It almost doesn't matter - if you can accept it not working when a leaf or ... blows onto it.
[15:35:26] <[VEGETA]> it is one object
[15:36:17] <[VEGETA]> the robot needs to detect that ball, estimate its future pose, and adjust its movement accordingly
[15:36:22] <[VEGETA]> that is what i am trying to achieve
[15:36:37] <[VEGETA]> that is what i mean by "dynamic obstacle"
[15:36:56] <[VEGETA]> static obstacles are easy because they are part of the 3d map itself
[15:37:13] <SpeedEvil> Unobscured balls in free space are perhaps one of the simplest CV problems
[15:40:09] <[VEGETA]> ?
[15:41:40] <deshipu> [VEGETA]: what if the ladybug is also moving along the ball?
[15:42:03] <[VEGETA]> do you think a mobile robot will care
[15:42:12] <[VEGETA]> if it can detect it that is
[15:42:20] <[VEGETA]> ignore it xD
[15:42:28] <deshipu> ok, take a human waving hands
[15:42:33] <SpeedEvil> http://www.ebay.co.uk/itm/3D-Photograph-Stereoscopic-Camera-Lens-w-Clip-For-iPhone-Smart-Phone-Tablet-/301969841859?hash=item464ece1ec3:g:hSYAAOSwOtBXS~ZF
[15:42:36] <deshipu> to the sides
[15:42:45] <deshipu> one object or multiple?
[15:43:16] <SpeedEvil> That depends if I have a chainsaw.
[15:43:40] <deshipu> the robot can collide with the hand, so you should probably predict its movement
[15:44:52] <[VEGETA]> yes, obstacles that cause problems must be detected
[15:44:59] <[VEGETA]> let's not argue more about that
[15:45:24] <deshipu> so even though the hand is physically attached to the human, it should be tracked as a separate object
[15:46:42] <[VEGETA]> for the mobile robot in mind, it won't even reach the hand
[15:46:53] <[VEGETA]> thus consider it one object with the body
[15:48:05] <deshipu> then make it a leg
[15:48:24] <deshipu> I mean the general problem, not just this one example case
[15:49:09] <deshipu> soon you find yourself simulating quantum mechanics...
[15:51:52] <[VEGETA]> one solution to that problem is to consider all connected points as one object
[15:52:14] <[VEGETA]> u can not have your leg separated from your body and have it still work :)
[15:52:55] <deshipu> you can carry a cat, that jumps out of your hands
[15:53:17] <deshipu> when the robot first sees you with the cat, you are connected
[15:54:16] <[VEGETA]> then when the cat is separated, it is considered a new object
[15:54:18] <[VEGETA]> so?
[15:55:19] <deshipu> how about a robot that avoids being kicked?
[15:55:34] <[VEGETA]> what is your point of all this?
[15:55:43] <deshipu> it's not enough to track just the human as a single object
[15:55:45] <[VEGETA]> I am talking about the main concept
[15:55:52] <deshipu> you have to track the leg
[15:56:28] <deshipu> I'm just trying to point out that the problem is much more complex when you look at the corner cases, than it seems at first
[15:56:46] <[VEGETA]> ok, let us solve the main problem
[15:56:54] <[VEGETA]> and let the other ones for another time
[15:57:04] <deshipu> teleportation
[15:57:23] <deshipu> that's a solution that lets the robot get from one point to another without collisions
[15:57:32] <deshipu> simple, effective
[15:58:23] <[VEGETA]> xD
[15:58:32] <[VEGETA]> if i can do it, i won't say no
[15:58:52] <[VEGETA]> man, I want to be able to perform that task only... for now
[15:59:04] <[VEGETA]> improvements and side cases can be for later
[15:59:15] <[VEGETA]> you had amazing ideas to help with
[15:59:59] <deshipu> it's often good to consider what your robot needs to do in detail, because there might sometimes be a simpler solution than how humans do it
[16:00:32] <deshipu> for instance, navigating a room of humans
[16:00:40] <deshipu> a room with walking humans in it
[16:00:49] <deshipu> maybe you don't need to track all of them
[16:01:25] <[VEGETA]> substitute that human with a cylinder and attach movement data to it... and it is done!
[16:01:37] <[VEGETA]> by this the robot will know enough
[16:02:11] <[VEGETA]> that that target is moving towards it in v velocity to an estimated path p...
[16:02:16] <deshipu> right, but humans do it by visual processing and tracking
[16:02:31] <deshipu> maybe it's better to just use some kind of proximity sensor
[16:02:41] <[VEGETA]> ^ robots should do this, but much simpler
[16:02:45] <deshipu> how fast is the robot compared to the human?
[16:03:26] <deshipu> if the robot is slower, no chance of avoiding a human, even if you can track
[16:03:48] <deshipu> if the robot is much faster, it can probably just avoid the human without having to track
[16:04:05] <deshipu> simply treating him as a static obstacle
[16:04:10] <[VEGETA]> it has to know where it is
[16:04:13] <[VEGETA]> where to go
[16:04:19] <[VEGETA]> what path to follow
[16:04:30] <deshipu> does it?
[16:04:40] <deshipu> maybe it only needs to know the direction?
[16:05:14] <[VEGETA]> no, i need to specify a goal point in a 3d map
[16:05:20] <deshipu> it all depends on what kind of robot it is and what it is supposed to do
[16:05:25] <deshipu> gtg
[16:06:40] <[VEGETA]> well, it is a car-like robot that has goal points to reach to
[16:08:01] <SpeedEvil> you're trying to make a bot for rocket league aren't you.
[16:10:00] <veverak> lol
[16:10:04] <veverak> rocket-league-syndrom
[16:10:06] <veverak> that sounds cool
[16:10:08] <veverak> ;)
[16:11:09] <[VEGETA]> whah...? no
[16:18:20] <[VEGETA]> you are trolling right now xD
[16:25:09] <[VEGETA]> gonna talk tomorrow
[16:25:15] <[VEGETA]> salam
[16:26:37] <SpeedEvil> http://i.imgur.com/63a1lWx.gifv - another fun reason why vision is hard
[16:29:45] <k_j> hi
[16:30:00] <k_j> do servos keep sending pulses if they are already in the positions?
[16:30:14] <veverak> servos never send pulses?
[16:30:24] <veverak> I mean, if we are talking about RC PWM servos
[16:30:25] <k_j> servo drivers
[16:31:33] <k_j> veverak, i meant servo drivers
[16:31:42] <veverak> depends on the servo driver
[16:31:49] <veverak> k_j: anyway, rc servos don't have feedback
[16:31:55] <veverak> so the driver doesn't really know where the servo is
[16:32:00] <veverak> can't detect if it arrived at position
[16:32:03] <veverak> :)
[16:32:16] <k_j> ok,so it needs to send the pulses relative to the commanded position
[16:32:27] <veverak> nope, pulses are absolute
[16:32:44] <Anniepoo> each pulse sent by controller is a separate command.
[16:32:44] <veverak> as long as pulses are sent -> servo tries to get to the position in the pulse
[16:32:58] <veverak> that simple pretty much
[16:33:08] <Anniepoo> the pulse is nn microseconds wide, that corresponds to mm degrees
[16:33:13] <veverak> if you stop sending signals -> motor in servo "turns off" and you can move manually
[16:33:20] <veverak> (with the servo)
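The pulse-width-to-angle correspondence Anniepoo describes is linear. A sketch using the typical 1000-2000 µs / 180° defaults (real servos vary, so these parameters are assumptions, not a spec):

```python
def pulse_us_to_angle(pulse_us, min_us=1000.0, max_us=2000.0, max_deg=180.0):
    """Map an RC servo pulse width (microseconds) to a target angle,
    clamping to the valid range.  1000-2000 us over 180 degrees is a
    common default; check the datasheet for a specific servo."""
    pulse_us = min(max(pulse_us, min_us), max_us)
    return (pulse_us - min_us) / (max_us - min_us) * max_deg

def angle_to_pulse_us(angle_deg, min_us=1000.0, max_us=2000.0, max_deg=180.0):
    """Inverse mapping: desired angle to the pulse width to send."""
    angle_deg = min(max(angle_deg, 0.0), max_deg)
    return min_us + angle_deg / max_deg * (max_us - min_us)
```

The ~20 ms repetition period is separate from this: it only sets how often the pulse (and thus the command) is refreshed.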
[16:34:37] <k_j> ok, so what is the rate, *when* is rate involved? when you want to send "more positions"? so you must wait for say, 20 ms, before sending a new one?
[16:35:09] <veverak> I thing yes
[16:35:12] <veverak> *think
[16:36:25] <k_j> i was under the impression that pulses (usually 1-2ms long) must be sent every 20 ms, even assuming the servo is already in the commanded position
[16:36:57] <veverak> nope
[16:37:18] <veverak> oh, yeah
[16:37:20] <veverak> it's 25 ms
[16:37:25] <veverak> was under the impression that the value was different
[16:37:38] <veverak> k_j: it's just that every 20ms you send pulse
[16:37:40] <k_j> wait, i am not following you now, "yeah" to what
[16:37:43] <k_j> ok
[16:37:48] <veverak> servo tries to get into that position
[16:37:56] <veverak> and that's everything
[16:38:05] <veverak> you stop sending pulse -> servo stops trying
[16:38:42] <k_j> but what i do not understand is if "smart" servo drivers *keep* sending pulses every 20 ms
[16:38:58] <veverak> usually
[16:39:00] <veverak> :)
[16:39:04] <veverak> arduino servo class does that
[16:39:14] <veverak> afaik for example pololu maestro does the same
[16:39:47] <k_j> isn't a waste of power?
[16:40:21] <veverak> probably?
[16:40:24] <veverak> depends on your use case
[16:40:29] <veverak> it may or may not be what you want
[16:40:44] <veverak> in case of arduino for example, you can program it in a way that this doesn't happen
[16:40:53] <k_j> interesting
[16:41:00] <veverak> again, depends on use case
[16:41:16] <k_j> is the arduino driver smarter than the pololu?
[16:41:29] <veverak> for example for like... "waving a flag" you probably just need to move the servo to the desired position and then stop sending the pulse
[16:41:38] <veverak> on the other hand, if servo is controlling front wheels of RC Car
[16:41:47] <veverak> you don't want it to stop trying to get to desired position :)
[16:41:54] <veverak> k_j: arduino is programmable :)
[16:42:01] <veverak> is as smart as you code it to be :)
[16:42:03] <k_j> right, it looks like the maestro only allows me the second case
[16:43:14] <veverak> k_j: what do you want to do?
[16:43:17] <veverak> ;)
[16:43:25] <k_j> another thing i do not understand is why it's so hard to find detailed specs about the minimum pulse width, max pulse width, rep. rate, max rotation
[16:43:53] <veverak> bad google skills?
[16:43:56] <k_j> veverak, no real applications in mind, just a moving arm for now, it's a new world for me, i am trying to understand the basics
[16:44:01] <veverak> https://en.wikipedia.org/wiki/Servo_control
[16:44:25] <k_j> thanks
[16:44:37] <veverak> k_j: usually min -> 1.0 ms, max -> 2.0 ms
[16:44:43] <veverak> depends on the servo though
[16:44:53] <veverak> towerpro 9g servos tend to be min -> 0.8ms, max -> 2.2ms?
[16:44:56] <veverak> :)
[16:45:21] <veverak> servos are usually min -> 0 degrees, max -> 180 degrees, but again, depends on the specific servo, can vary a bit
[16:45:33] <veverak> even servo "resolution" can vary
[16:45:42] <veverak> cheap servos are able to make 256 steps per their full range
[16:45:43] <k_j> yes, the ones i have here only allow 90 degrees
[16:45:48] <veverak> better ones go up to 4096 steps/range
[16:45:50] <veverak> :)
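Those step counts translate directly into angular resolution, i.e. the smallest commandable change:

```python
def servo_resolution_deg(range_deg=180.0, steps=256):
    """Smallest angle increment a servo can resolve, given its travel
    range and step count (256 for cheap servos, 4096 for better ones,
    per the figures above)."""
    return range_deg / steps
```

So a cheap 256-step servo moves in roughly 0.7-degree increments, a 4096-step one in about 0.04-degree increments.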
[16:46:40] <k_j> can you suggest a very good servo , for 180 degrees and reasonably fast?
[16:46:48] <veverak> dunno
[16:46:56] <veverak> theeeere are much more parameters to consider :)
[16:47:21] <veverak> range/speed/strength/material_of_cogwheels
[16:47:34] <veverak> size
[16:47:37] <veverak> many others
[16:47:43] <veverak> k_j: I suggest to study more about what you want :D
[16:47:50] <k_j> indeed
[16:47:54] <veverak> eventually, find a local RC shop in your town and go take a look
[20:14:13] <rue_shop3> my 500 74HC595 arrived today, I'm happy.
[20:21:07] <rue_house> so did mine!
[20:39:22] <Tom_itx> rue_house i bet they screwed up rue_shop3's order and sent it to you instead
[21:46:37] <mrdata> \o/
[21:46:57] <mrdata> rue_shop3, what are you going to use them for?
[21:47:06] <mrdata> rue_house, ^
[21:47:38] <mrdata> i used a few of these to make a pseudo-random generator
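One classic way to turn a chain of shift registers like the 74HC595 into a pseudo-random generator is a linear-feedback shift register; whether that is what mrdata built is an assumption, but a software model of the standard 16-bit maximal-length LFSR (taps 16, 14, 13, 11) looks like:

```python
def lfsr16(seed=0xACE1):
    """16-bit Fibonacci LFSR with a maximal-length tap set, yielding
    successive register states (period 65535).  In hardware the feedback
    bit would be XOR gates driving the '595 chain's serial input."""
    state = seed & 0xFFFF
    if state == 0:
        raise ValueError("LFSR must be seeded with a non-zero value")
    while True:
        # feedback bit = xor of bits 0, 2, 3, 5 of the current state
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state
```

The all-zero state is the one fixed point, which is why the seed must be non-zero.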
[21:49:30] <Polymorphism> https://www.youtube.com/watch?v=dsSo5qOGk6c
[23:23:58] <wildmage> Tom_itx, how's my channel?
[23:39:50] <Tom_itx> umm what channel?
[23:40:08] <Tom_itx> oh, going nicely i believe