#linuxcnc-devel Logs

Jan 18 2018

07:23 AM skunkworks__: pcw_home, I had thought about something similar. Yours is more elegant. Are you saying to adjust the P-I values on the fly based on error?
07:32 AM sync: pcw_mesa: it might be easier to use a fixed lut for thc if a pid does not work well
07:52 AM skunkworks__: lut? for closed-loop height control?
07:57 AM sync: yes
07:58 AM sync: or just use that for feedforward
07:58 AM sync: it is how ECUs work
08:35 AM seb_kuzminsky: linuxcnc-build: force build --branch=2.7 0000.checkin
08:35 AM linuxcnc-build: build #5330 forced
08:35 AM linuxcnc-build: I'll give a shout when the build finishes
08:53 AM pcw_home: sync: yes you could use a large lut but PID/Lincurve doesn't need any new components
08:53 AM pcw_home: (plus you need integral term)
08:54 AM pcw_home: (lincurve is sort of a small lut)
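A minimal HAL sketch of the PID/lincurve gain-scheduling idea pcw_home describes, applied to a torch-height-control loop. Everything below is illustrative, not from a real config: the instance names, table values, and signal names are assumptions, and it presumes a pid version whose gains are pins (rather than parameters) so they can be netted:

    # map the magnitude of the pid's own error through a small lut (lincurve)
    # and feed the result back into the pid's P gain
    loadrt pid names=thc-pid
    loadrt lincurve names=thc-gain personality=3
    loadrt abs names=thc-abs

    addf thc-abs servo-thread
    addf thc-gain servo-thread
    addf thc-pid.do-pid-calcs servo-thread

    # 3-point table: gentle gain near zero error, stronger gain for large errors
    setp thc-gain.x-val-00 0.0
    setp thc-gain.y-val-00 0.5
    setp thc-gain.x-val-01 5.0
    setp thc-gain.y-val-01 1.0
    setp thc-gain.x-val-02 20.0
    setp thc-gain.y-val-02 2.0

    net thc-error     thc-pid.error => thc-abs.in
    net thc-error-mag thc-abs.out   => thc-gain.in
    net thc-pgain     thc-gain.out  => thc-pid.Pgain

    # wiring of thc-pid.command (target arc voltage), thc-pid.feedback
    # (measured arc voltage), and thc-pid.output (Z offset) is omitted here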
10:09 AM linuxcnc-build: Hey! build 0000.checkin #5330 is complete: Success [build successful]
10:09 AM linuxcnc-build: Build details are at http://buildbot.linuxcnc.org/buildbot/builders/0000.checkin/builds/5330
11:29 AM Tom_itx is now known as Tom_L
11:37 AM -!- #linuxcnc-devel mode set to +v by ChanServ
11:48 AM seb_kuzminsky: hmm, no github notification to the buildbot came from that push
11:49 AM seb_kuzminsky: linuxcnc-build: force build --branch=2.7-packaging-fixes 0000.checkin
11:49 AM linuxcnc-build: build forced [ETA 1h34m13s]
11:49 AM linuxcnc-build: I'll give a shout when the build finishes
11:54 AM linuxcnc-build: build #5331 of 0000.checkin is complete: Failure [failed fetch branch to local git repo] Build details are at http://buildbot.linuxcnc.org/buildbot/builders/0000.checkin/builds/5331
11:58 AM seb_kuzminsky: weird
12:00 PM seb_kuzminsky: works for me everywhere else
12:00 PM seb_kuzminsky: linuxcnc-build: force build --branch=2.7-packaging-fixes 0000.checkin
12:00 PM linuxcnc-build: build forced [ETA 1h34m13s]
12:00 PM linuxcnc-build: I'll give a shout when the build finishes
12:01 PM seb_kuzminsky: and it worked just now
12:01 PM seb_kuzminsky: guess github glitched
01:26 PM seb_kuzminsky: cool, 2.7 builds & passes all the tests on buster, no changes at all needed
01:37 PM linuxcnc-build: Hey! build 0000.checkin #5332 is complete: Success [build successful]
01:37 PM linuxcnc-build: Build details are at http://buildbot.linuxcnc.org/buildbot/builders/0000.checkin/builds/5332
03:15 PM seb_kuzminsky: hmm, but the buster rtpreempt kernel has crashed twice in the past 5 minutes...
03:18 PM seb_kuzminsky: i suppose it could be because my hypervisor is running lucid still...
03:49 PM sync: oh wow
03:49 PM sync: that is old
04:06 PM * seb_kuzminsky <-- lazy
04:19 PM sync: 8 years no update lazy? :D
04:28 PM rene-dev: pcw_home pcw_mesa what's your experience running multiple ethernet mesa cards from a switch?
04:28 PM rene-dev: I see the driver supports it, but are there any problems?
04:30 PM rene-dev: seb_kuzminsky have you had a chance to look at my PR? https://github.com/LinuxCNC/linuxcnc/pull/383
04:34 PM seb_kuzminsky: rene-dev: thanks for the reminder
04:35 PM pcw_mesa: rene-dev: It works OK if you have a fast host (and a GigE switch)
04:35 PM rene-dev: It might also be a problem in other configs, I didn't check all of them
04:35 PM rene-dev: pcw_mesa how many have you tried? well, it's hard to get non-GigE switches or NICs these days...
04:35 PM pcw_mesa: the hal file needs to be organized so the read-requests are done first
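A sketch of that ordering for two cards behind one switch; the board names are assumptions, and it relies on hm2_eth's split read-request/read functions so that both requests go out on the wire before the driver waits for either reply:

    # queue read requests to all cards first...
    addf hm2_7i76e.0.read-request servo-thread
    addf hm2_7i76e.1.read-request servo-thread
    # ...then collect the replies, so the two requests overlap in flight
    addf hm2_7i76e.0.read servo-thread
    addf hm2_7i76e.1.read servo-thread
    addf motion-command-handler servo-thread
    addf motion-controller servo-thread
    # writes last, after motion has computed the new commands
    addf hm2_7i76e.0.write servo-thread
    addf hm2_7i76e.1.write servo-thread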
04:36 PM seb_kuzminsky: rene-dev: i wonder why the scaling was there in the first place
04:36 PM seb_kuzminsky: seems useless
04:36 PM pcw_mesa: I did 4 at 1 KHz
04:36 PM rene-dev: yeah, of course. keen to try more? :)
04:36 PM rene-dev: seb_kuzminsky I have no idea.
04:37 PM rene-dev: so the switch itself is not an issue?
04:38 PM seb_kuzminsky: looks like a straightforward simplification, but i haven't tested it, i assume you have?
04:38 PM pcw_mesa: no, it adds some store/forward latency but it's not a big deal at 1 KHz
04:38 PM rene-dev: seb_kuzminsky the PR? yes, I tested that. just home, move and home again to test.
04:40 PM rene-dev: mesaflash seems to output some stats on errors and lost packets, is that something useful to check for?
04:40 PM pcw_mesa: You can think of the switch as a concentrator for 100BT channels
04:41 PM pcw_mesa: yes though such errors seem very rare
04:41 PM seb_kuzminsky: rene-dev: better late than never, right? thanks for those simplifications
04:43 PM rene-dev: seb_kuzminsky thanks. when you are keen on testing stuff, try this :D https://github.com/LinuxCNC/linuxcnc/pull/354
04:44 PM rene-dev: #369 was fixed by someone else, so I closed it
04:46 PM rene-dev: pcw_mesa what's the scratch and debug stuff that it reports?
04:54 PM pcw_mesa: Not sure about debug, the scratch register is used by the driver I think (for write verification)
04:58 PM pcw_mesa: I should get the stored program stuff working so a single broadcast read request is sufficient for all devices
05:00 PM pcw_mesa: broadcast already works (V16 firmware and later)
05:00 PM rene-dev: is that planned?
05:07 PM rene-dev: if there is any documentation, I can look what it takes to implement that in the driver
05:08 PM rene-dev: hmm, other buses use a broadcast thing to sync
05:09 PM rene-dev: I guess that's not strictly necessary, but might get even better sync between cards
05:21 PM pcw_mesa: yeah theoretically the packets will be broadcast on all switch ports at very close to the same time
05:28 PM rene-dev: yes, but the write packets are sent in sequence
05:47 PM pcw_mesa: right, but typically write time is less important (and could be synced by the DPLL at read time + X usec if needed)
05:50 PM seb_kuzminsky: as long as it avoids all the boards trying to write at the same time and colliding and having to back off and retransmit
06:00 PM pcw_mesa: don't think that happens with switches (per-channel buffering)
06:01 PM pcw_mesa: unless you can overwhelm the 1G link to host (not likely)
06:11 PM rene-dev: they won't retransmit, it's UDP; lost packets just get discarded
06:48 PM jepler: seb_kuzminsky: weirdly, increasing MaxRequestWorkers from 18 to 30 barely changed memory usage
07:33 PM jepler: but I can see via /server-status that the number of workers has increased
07:33 PM jepler: > 22 requests currently being processed, 7 idle workers
07:33 PM jepler: e.g., earlier
07:38 PM seb_kuzminsky: that's kinda weird
07:38 PM seb_kuzminsky: i guess they all share their text pages, but the html should still be private i'd think, since it comes out of a database instead of the filesystem
07:39 PM seb_kuzminsky: last time we had to wait for some kind of special condition to see the out-of-memory slowness
07:46 PM jepler: let me stress it...
07:46 PM jepler: load 12, 0.0% idle CPU ...
07:47 PM jepler: afterwards, 1361652 buff/cache
07:49 PM jepler: caused a visible blip in CPU and bandwidth in the DigitalOcean control panel, but not in memory...
07:50 PM jepler: this was with 'ab -n 200 -c 30' from another digitalocean droplet in a different datacenter
10:43 PM seb_kuzminsky: hmm, github is not accepting my push
10:43 PM seb_kuzminsky: oh, there it goes