#linuxcnc-devel | Logs for 2013-07-05

[09:31:07] <mhaberler> jepler: looking into shutdown.. it just occurred to me that in userland flavors we don't have a module shutdown sequence like in kthreads (where module unload will call hal_exit())
[09:31:43] <mhaberler> in a single HAL instance like for 2.6 it doesn't matter, but in the future it will, because resource deallocation will become more stringent in the cross-instance linking scenario
[09:32:11] <mhaberler> I'm thinking about adding an iterator to unload all loaded modules in sim_rtapi_app.cc on exit
[09:32:22] <jepler_> mhaberler: I have no problem with that
[09:33:31] <jepler_> mhaberler: In my view it can be done after getting rtos merged; I hate seeing yet one more thing that is really orthogonal being added to the todo list
[09:34:01] <jepler_> (I think the same thing about hal-as-hal-component fwiw)
[09:35:07] <mhaberler> well there needs to be better handling for at least xenomai-user than a plain exit(), and once you touch it there's not much case for a half-baked approach
[09:35:35] <jepler_> If that's the case then I withdraw my objection
[09:35:47] <jepler_> I just really want the day to come that we can merge rtos
[09:35:55] <mhaberler> well the current handling is too dumb for merging..
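A minimal sketch of the unload-all iterator proposed above for the exit path of sim_rtapi_app.cc; num_loaded, loaded_comps, and unload_module() are hypothetical names standing in for whatever bookkeeping rtapi_app actually keeps:

    /* hypothetical exit-path cleanup for userland rtapi_app:
       unload components in reverse load order so nothing is
       pulled out from under a still-loaded dependent */
    static void unload_all_components(void)
    {
        int i;
        for (i = num_loaded - 1; i >= 0; i--)
            unload_module(loaded_comps[i]);  /* ends by calling hal_exit() */
    }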
[09:36:26] <mhaberler> btw your cargo-cult patch looks like an interesting startup option to me
[09:36:49] <mhaberler> since it forces behavior like idle=poll but can be turned on/off without reboot
[09:37:21] <jepler_> is that what it does? I made no attempt to understand it
[09:37:24] <mhaberler> yes
[09:37:49] <mhaberler> or so I understood reading up on it
[09:37:51] <jepler_> at one point I was operating under the assumption that there was *something* in cyclictest missing in rtapi_app to explain memleak's(?) reported bad rt performance in rtapi_app, so I started copying stuff
[09:38:01] <jepler_> but if that is what it does -- yeah that sounds like a good idea
[09:38:14] <jepler_> maybe write a better change message about it then
[09:39:09] <jepler_> I hate vintage C++ compilers
[09:39:23] * jepler_ is experiencing the joys of $DAY_JOB and gcc 4.1.1.
[09:39:45] <mhaberler> are you on probation ;-?
[09:39:59] <jepler_> because of reasons this is the compiler we use to target 32-bit windows
[09:40:03] <mhaberler> btw.. we could just as well warp ahead to c99 from c90
[09:40:13] <jepler_> in linuxcnc? I would not mind.
[09:42:00] <mhaberler> re cargo-cult patch.. in userland the place to apply it would be an rtapi_app command and a matching halcmd to drive it; in kthreads - not sure; a sleeping shell script doesn't strike me as very elegant
[09:42:34] <jepler_> hold on, let me find something I read about a kernel API
[09:42:46] <mhaberler> you mean in-kernel? ha.
[09:44:16] <jepler_> have a look at kernel Documentation/power/pm_qos_interface.txt
[09:44:42] <jepler_> there are some kernel APIs, pm_qos_add_request / remove_request
[09:46:25] <mhaberler> ah yes, saw that one
[09:46:39] <mhaberler> this hints at the poll equivalent behaviour: http://pm-blog.yarda.eu/2011/10/deeper-c-states-and-increased-latency.html
[09:47:13] <jepler_> http://www.breakage.org/2012/11/processor-max_cstate-intel_idle-max_cstate-and-devcpu_dma_latency/
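For reference, the userland half of the interface described in those links is small; this is a minimal sketch of holding a PM QoS request, per Documentation/power/pm_qos_interface.txt: write a 32-bit microsecond value to /dev/cpu_dma_latency and keep the fd open; the kernel drops the request when the fd is closed.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int32_t target_us = 0;  /* 0 forbids deep C-states, much like idle=poll */
        int fd = open("/dev/cpu_dma_latency", O_WRONLY);
        if (fd < 0) {
            perror("open /dev/cpu_dma_latency");
            return 1;
        }
        if (write(fd, &target_us, sizeof target_us) != sizeof target_us) {
            perror("write");
            close(fd);
            return 1;
        }
        pause();     /* hold the request for as long as the process lives */
        close(fd);   /* the request is dropped here (or on process exit) */
        return 0;
    }

Because the request vanishes when the fd closes, this is switchable at runtime without a reboot - and a test harness could loop over different target values to do exactly the C-state latency sweep suggested below.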
[09:47:25] <mhaberler> as a side effect, the power consumption (and heat dissipation) is lower when linuxcnc isn't running, so it's a 'green patch' (BS bingo alert)
[09:49:17] <mhaberler> the previous link also shows the high spike (58ms) when coming out of C6, which I suspect is the SMPS regulator ramp-up time
[09:50:03] <jepler_> that reminds me I was going to read the implementation of mlockall
[09:53:07] <mhaberler> hm, that pmqos-static.py script looks like it would be suitable for automatic latency testing against various C-states, giving an indication of whether C-states could be a problem in the setup
[09:57:44] <mhaberler> allowing one to distinguish high-latency cases from, say, graphics driver issues
[09:59:16] <seb_kuzminsky> pcw_home: did you see the 7i43/7i39 firmware kerfuffle on emc-users? might be a good opportunity to test the new firmware building buildbot
[10:08:01] <pcw_home> Yes I will try to help Seb when he gets a chance (Need to sort the 'standard' configs from the oddballs and probably work out some build issues)
[10:09:05] <pcw_home> Somehow I thought you were skunkworks, which is a little weird...
[10:09:37] <seb_kuzminsky> heh
[10:09:49] <seb_kuzminsky> we look a lot alike, especially on irc
[10:10:12] <seb_kuzminsky> couple S'es, couple K's ...
[10:10:30] <pcw_home> :-)
[10:13:03] <pcw_home> so what's the best way to proceed on this? Should I try adding the new sources to the build scripts?
[10:13:04] <pcw_home> I do need some sorting of one-off configs from generally useful ones
[10:14:20] <seb_kuzminsky> brb
[10:18:04] <skunkworks> heh
[10:19:22] <skunkworks> seb_kuzminsky: did you see.. http://www.electronicsam.com/images/emco/emco.JPG
[10:19:24] * skunkworks can't wait to play with them
[10:21:36] <pcw_home> Those are the ones with the funny latched interface?
[10:23:48] <skunkworks> yes
[10:24:25] <seb_kuzminsky> skunkworks: sweet!
[10:24:34] <skunkworks> well, 5 of them. The Compact 5 CNC actually has a controller in it. My mom was wondering why I didn't want the 'big one' ;)
[10:25:42] <pcw_home> Ahh the yellow one with the buttons/LEDs
[10:25:43] <pcw_home> I wonder if the latch is so the data pins can be used as inputs
[10:25:59] <skunkworks> yes
[10:26:33] <skunkworks> pcw_home: the comment was - there was a dongle in series with the printer cable. maybe so the dongle could be read.
[10:26:59] <pcw_home> so you would need a comp or something to do the outputs and read the inputs
[10:27:43] <seb_kuzminsky> pcw_home: i'm not sure yet how one-off firmwares would fit into the build infrastructure
[10:27:55] <skunkworks> I don't think the inputs are clocked - only the outputs
[10:28:13] <seb_kuzminsky> we currently build non-release branches of linuxcnc on the regular buildbot, maybe something like that could work for firmwares too
[10:28:28] <skunkworks> (the step/dir stuff)
[10:29:28] <pcw_home> Oh I see they did that so they could read the dongle live
[10:29:34] <skunkworks> that is the thought.
[10:29:59] <pcw_home> ding-dong the dongle is dead...
[10:30:22] <skunkworks> yay!
[10:30:35] <seb_kuzminsky> pcw_home: if i were you i think i'd want almost every firmware to be part of the normal build - seems easier for users
[10:30:45] <skunkworks> maybe sunday I can try to run one on linuxcnc.
[10:31:33] <skunkworks> jepler had a good idea of using the inverted printer port reset option to send a clock every base period.
[10:31:41] <pcw_home> Yeah, and keeps everything up-to-date (as long as builds are just triggered by source changes)
[10:32:00] <seb_kuzminsky> pcw_home: right
[10:32:14] * skunkworks will let you guys get to work! :)
[10:32:19] <pcw_home> Yes that would do without hacking the boards
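Whichever way it goes - the parport reset trick above, or a dedicated comp as pcw_home suggested earlier - the component route would look roughly like this. The HAL calls are the real component API; the pin names and the latch behavior are invented placeholders, not the actual EMCO protocol:

    #include "rtapi.h"
    #include "rtapi_app.h"
    #include "hal.h"

    /* pin pointers must live in HAL shared memory, hence hal_malloc() */
    struct latch_state {
        hal_bit_t *clk;     /* strobe for the external data latch */
        hal_bit_t *sense;   /* one of the unclocked inputs, read back */
    };
    static struct latch_state *st;
    static int comp_id;

    static void update(void *arg, long period)
    {
        *(st->clk) = !*(st->clk);  /* toggle the latch clock each thread period */
        /* ... drive the step/dir data pins, sample *(st->sense) ... */
    }

    int rtapi_app_main(void)
    {
        comp_id = hal_init("emco_latch");
        if (comp_id < 0) return comp_id;
        st = hal_malloc(sizeof(*st));
        if (!st ||
            hal_pin_bit_new("emco-latch.clk", HAL_OUT, &st->clk, comp_id) ||
            hal_pin_bit_new("emco-latch.sense", HAL_IN, &st->sense, comp_id) ||
            hal_export_funct("emco-latch.update", update, NULL, 0, 0, comp_id)) {
            hal_exit(comp_id);
            return -1;
        }
        hal_ready(comp_id);
        return 0;
    }

    void rtapi_app_exit(void) { hal_exit(comp_id); }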
[10:32:45] <pcw_home> bbl time to walk Charlie
[10:33:20] <seb_kuzminsky> seeya
[10:33:30] <skunkworks> yes - this way there would be an option to run the existing electronics with no mods at all. That might be exciting for some people.
[10:33:48] <skunkworks> plug and play with linuxcnc - yay!
[10:34:06] <pcw_home> Especially if they had and used the original software
[10:34:30] <skunkworks> (they are not the highest performance - 72 steps/rev - the original control did like 30 ipm max)
[10:34:44] <skunkworks> right
[10:34:49] <pcw_home> OK, I'll clean up the source before the weekend
[10:35:05] <pcw_home> but cute...
[10:42:38] <jepler_> so it looks like mlockall *is* supposed to fail with ENOMEM if a count of all pages to lock exceeds the lock limit
[10:43:05] <jepler_> but actual errors in do_mlock_pages and mlock_fixup are dropped on the floor
[10:43:47] <jepler_> but the main case in which that would happen (racing mlockall in one thread with allocation in a second thread) doesn't apply to us
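The pattern under discussion, sketched: mlockall(2) does report ENOMEM when the pages to be locked exceed RLIMIT_MEMLOCK, so a realtime process should check the return value rather than letting the failure get dropped:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* pin current and future pages so the RT threads never page-fault */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");  /* e.g. ENOMEM when over the memlock rlimit */
            return 1;
        }
        /* ... realtime work ... */
        munlockall();
        return 0;
    }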
[10:51:34] <zultron> Dang, skunkworks, what will you do with all those? And what's that little CNC mill hiding in the corner?
[10:52:36] <zultron> Share the wealth already. ;)
[11:19:26] <seb_kuzminsky> hey zultron
[11:22:14] <zultron> Hi, seb_kuzminsky !
[13:28:48] <skunkworks> zultron: heh - it is a small xyz training mill
[13:33:33] <zultron> It's cute. Almost would fit in your pocket. :)
[13:39:11] <zultron> jepler, we talked sometime back about the issues surrounding keeping kmodules in /lib/modules/<kver>, and you (I think) said a previously-encountered problem was the modules being loaded automatically by the kernel, which in effect starts the realtime system.
[13:40:12] <zultron> If that's not desirable (I don't know if it is or not), why is the realtime script in /etc/init.d, which implies to me that the realtime system *should* be started at system boot time?
[13:48:31] <jepler> zultron: no idea
[13:49:02] <jepler> it sure could be moved to /usr/share/linuxcnc on first glance
[13:52:08] <zultron> Alright. Or ${prefix}/bin?
[13:52:33] <jepler> my first impulse is to say that it's not for running directly
[13:52:40] <jepler> halrun and linuxcnc scripts run it for you
[13:52:49] <jepler> but I'm open to the alternate viewpoint too
[13:53:13] <zultron> Ah hah, then ${libexec}, maybe?
[13:53:28] <jepler> ok
[13:53:40] <zultron> I think you're right, no need to put it in everyone's $PATH.
[13:53:46] <jepler> it's a shell script but its contents might be arch dependent so libexec sounds good
[13:55:58] <zultron> http://www.gnu.org/prep/standards/html_node/Directory-Variables.html
[13:56:20] <zultron> Look for 'libexecdir' on that link. Seems to match your description.
[13:58:27] <jepler> the folks who originally did the NURBS implementation have contacted me with an updated version including smarter subdivision to arc primitives and compatibility with fanuc nurbs codes. Unfortunately, they supplied whole files rather than patches and didn't specify what their starting version was
[13:58:45] <jepler> I would be happy to give a copy of their file to anyone with an interest in looking at it with a view to getting it into the master branch
[13:59:00] <zultron> Hmm, lots of stuff could go in there, if that's what we decided. rtapi_app, linuxcnc_module_helper, flavor....
[13:59:39] <jepler> yes there's a lot of stuff that should be moved to more appropriate locations
[13:59:53] <jepler> I welcome any effort you make to approach that goal
[14:00:10] <jepler> .. in fact I tossed the file I was mailed online: http://emergent.unpythonic.net/files/sandbox/Nurbs_G6_2.rar
[14:00:10] <zultron> I'm not qualified to decide for many of those things, but I'll be happy to work on it.
[14:39:46] <seb_kuzminsky> jepler: you should point them to that wiki page you wrote saying how to contribute to linuxcnc ;-)
[17:40:31] <linuxcnc-build> Hey! build checkin #1136 is complete: Success [build successful]
[17:40:31] <linuxcnc-build> Build details are at http://buildbot.linuxcnc.org/buildbot/builders/checkin/builds/1136
[17:43:31] <seb_kuzminsky> more like it
[19:31:33] <KGB-linuxcnc> dgarrett master adc224f linuxcnc (6 files in 4 dirs) * pyngcgui fixes for deb packaging
[19:32:12] <linuxcnc-build> build #1143 of hardy-amd64-sim is complete: Failure [failed git] Build details are at http://buildbot.linuxcnc.org/buildbot/builders/hardy-amd64-sim/builds/1143 blamelist: Dewey Garrett <dgarrett@panix.com>
[19:32:15] <linuxcnc-build> build #1139 of hardy-i386-realtime-rip is complete: Failure [failed git] Build details are at http://buildbot.linuxcnc.org/buildbot/builders/hardy-i386-realtime-rip/builds/1139 blamelist: Dewey Garrett <dgarrett@panix.com>
[19:32:15] <linuxcnc-build> build #1141 of hardy-i386-sim is complete: Failure [failed git] Build details are at http://buildbot.linuxcnc.org/buildbot/builders/hardy-i386-sim/builds/1141 blamelist: Dewey Garrett <dgarrett@panix.com>
[19:32:23] <linuxcnc-build> build #1139 of lucid-i386-realtime-rip is complete: Failure [failed git] Build details are at http://buildbot.linuxcnc.org/buildbot/builders/lucid-i386-realtime-rip/builds/1139 blamelist: Dewey Garrett <dgarrett@panix.com>
[19:32:24] <linuxcnc-build> build #1138 of lucid-i386-sim is complete: Failure [failed git] Build details are at http://buildbot.linuxcnc.org/buildbot/builders/lucid-i386-sim/builds/1138 blamelist: Dewey Garrett <dgarrett@panix.com>
[19:32:24] <linuxcnc-build> build #1140 of lucid-amd64-sim is complete: Failure [failed git] Build details are at http://buildbot.linuxcnc.org/buildbot/builders/lucid-amd64-sim/builds/1140 blamelist: Dewey Garrett <dgarrett@panix.com>
[19:32:27] <linuxcnc-build> build #1138 of lucid-rtai-i386-clang is complete: Failure [failed git] Build details are at http://buildbot.linuxcnc.org/buildbot/builders/lucid-rtai-i386-clang/builds/1138 blamelist: Dewey Garrett <dgarrett@panix.com>
[19:33:07] <cradek> !??
[19:34:29] <cradek> linuxcnc-build: force build --branch=master checkin
[19:34:34] <linuxcnc-build> The build has been queued, I'll give a shout when it starts
[19:37:26] <dgarr> hmm: fatal: unable to connect to git.linuxcnc.org
[20:47:55] <linuxcnc-build> build #1137 of checkin is complete: Failure [failed] Build details are at http://buildbot.linuxcnc.org/buildbot/builders/checkin/builds/1137 blamelist: Dewey Garrett <dgarrett@panix.com>
[20:47:56] <linuxcnc-build> build forced [ETA 1h18m39s]
[20:47:56] <linuxcnc-build> I'll give a shout when the build finishes
[21:44:54] <seb_kuzminsky> prolly my home network on the blink again :-(
[22:34:56] <linuxcnc-build> Hey! build checkin #1138 is complete: Success [build successful]
[22:34:57] <linuxcnc-build> Build details are at http://buildbot.linuxcnc.org/buildbot/builders/checkin/builds/1138
[23:03:28] <cradek> yay
[23:13:30] <seb_kuzminsky> so i think jeff figured out the linuxcncrsh test problem, using cradek's unbehiddenning insight
[23:14:18] <seb_kuzminsky> http://linuxcnc.mah.priv.at/irc/%23linuxcnc-devel/2013-07-03.html#14:27:33
[23:20:49] <seb_kuzminsky> the test.sh script writes "set home 0" to its socket, which goes to linuxcncrsh, which sends an NML command to task, which sends a message to motion, which is supposed to home and update the emcmotStatus flags to indicate that it's done
[23:21:22] <seb_kuzminsky> the emcmotStatus struct gets sent back from motion to task, and when that completes, task knows the joint is homed
[23:21:34] <seb_kuzminsky> until that happens, task considers the joint un-homed, and disallows mdi
[23:25:52] <seb_kuzminsky> i use 'set set_wait done' early in the test, which the linuxcncrsh docs say should wait until the command is complete before sending the next one to linuxcnc
[23:26:02] <seb_kuzminsky> i wonder if task reports the command complete when it sends it to motion?
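The exchange in question, roughly as test.sh issues it over the linuxcncrsh socket; 'set set_wait done' and 'set home 0' are quoted above, the surrounding lines are a plausible reconstruction rather than the actual script:

    set set_wait done    <- later commands should block until "done"
    set mode manual
    set home 0           <- but does "done" mean task handed the command
                            to motion, or that motion reported the joint homed?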
[23:33:33] <seb_kuzminsky> status communication from motion to task looks pretty sketchy