21:25:09 #startmeeting Dev Meeting, 12 March 2014, part 2
21:25:09 Meeting started Wed Mar 12 21:25:09 2014 UTC. The chair is gmcharlt. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:25:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:25:09 The meeting name has been set to 'dev_meeting__12_march_2014__part_2'
21:25:27 #info Chris Cormack
21:25:28 yeah, the first meeting was the best ever - with lots of cheese in it ;-)
21:25:40 #info Mark Tompsett
21:25:45 #info Magnus Enger, Oslo Public Library
21:25:50 Don't mention the bonus cheese for those who attend both. ;)
21:26:12 #info Thomas Dukleth, Agogme, New York City
21:26:23 #link http://wiki.koha-community.org/wiki/Developers_IRC_Meeting,_March_12_2014 Agenda
21:26:32 #info gmcharlt = Galen Charlton, 3.16 RM
21:27:02 #link http://meetings.koha-community.org/2014/koha_dev_meeting__12_march_2014_part1.html Minutes of first part of meeting
21:28:27 #info Robin Sheat, Catalyst IT
21:29:03 #topic Performance - Plack
21:29:18 in the first part of the meeting there was some discussion about work on Plack at the hackfest
21:29:46 #info Jared Camins-Esakov, C & P Bibliography Services
21:30:01 and I believe dpavlin and ashimema will be trying to organize a consolidation of the various threads of Plack support
21:30:23 some stuff I know folks are aware of and have patches for are the Zebra reconnection issues
21:31:09 I, hopefully not foolhardily, announced in-core Plack configs for both the intranet and the OPAC as a goal for the 3.16 release
21:31:14 #info Liz Rea, Catalyst IT
21:31:21 comments, particularly from Catalyst?
21:31:41 We have one site in production with Plack; minor issues have shown up, but nothing world-ending.
21:32:08 yeah, the issues have gone now that we have switched to just making a z3950 connection per search and not trying to reuse it
21:32:16 OPAC-only, or with staff as well?
21:32:17 (which is what happens under CGI anyway)
21:32:20 OPAC only
21:32:34 Though, there is a bigger issue when it comes to having many sites on one server using it: that won't work very well atm, as there's no ability to pool resources between them.
21:32:43 Solutions to this are possible, I just haven't had a chance to look.
21:32:59 even two seems dodgy.
21:33:00 OK, well, that keeps it in the realm of experimental
21:33:06 * magnuse hands eythian a bunch of round tuits
21:33:27 my goal is mostly to have Plack be functional enough for dev installations that more devs will actually use it that way
21:33:42 yeah, there's no reason that won't work pretty well on the OPAC side.
21:33:47 * magnuse thinks that is an excellent plan
21:33:58 We're hoping to look into the staff client side some time soon.
21:34:09 eythian: what resources would you want to pool?
21:34:14 i will attempt to sign off 7844 tonight if someone else doesn't beat me to it.
21:34:15 magnuse: memory
21:34:30 out of curiosity... if it does get into 3.16 experimentally... does that mean the next release it could go production, or will there be more time than that?
21:34:48 magnuse: the current plack system pre-allocates everything, and doesn't "bubble".
21:35:07 So, if you have 5 hosts on it, you use 5 times the RAM as when you have one, even when idle
21:35:22 so if one site gets busy, there's a whole lot of wasted memory you can't use.
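
A minimal sketch of the "one Zebra connection per search" approach described above, using the standard ZOOM Perl binding. The subroutine name, host, port, and database name are illustrative assumptions, not the actual Koha search code:

    use strict;
    use warnings;
    use ZOOM;

    # hypothetical helper, not the real C4::Search code
    sub zebra_search {
        my ($pqf_query) = @_;

        # open a fresh connection for this search instead of caching one
        # across Plack requests; stale cached handles are the kind of thing
        # behind the reconnection issues mentioned above
        my $conn = ZOOM::Connection->new(
            'localhost', 9999,
            databaseName          => 'biblios',
            preferredRecordSyntax => 'usmarc',
        );

        my $results = $conn->search_pqf($pqf_query);
        my $hits    = $results->size();

        # fetch at most the first 20 raw records
        my @records;
        my $max = $hits < 20 ? $hits : 20;
        for my $i ( 0 .. $max - 1 ) {
            push @records, $results->record($i)->raw();
        }

        $results->destroy();
        $conn->destroy();
        return ( $hits, \@records );
    }

    # example use: a title search in PQF syntax
    my ( $hits, $records ) = zebra_search('@attr 1=4 "harry potter"');

Opening a connection per search costs a little latency per request but avoids holding stale sockets in long-lived Plack workers, which is the trade-off described in the log.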
21:35:48 mtompset: no set schedule - it will be ready for production when we think there are no more evil bugs lurking
21:35:53 eythian: ah, thanks
21:36:08 (because each site has its own plack instance in order to do UID separation properly.)
21:36:24 * mtompset mutters something about version numbers. ;)
21:36:36 eythian: is that a problem with plack or with koha?
21:36:36 eythian: is the "it" to which you are referring Plack or Zebra?
21:36:47 magnuse: mostly neither
21:37:24 thd: *So, if you have 5 hosts on one server
21:37:28 you meant that one?
21:38:01 Just as a comment for anyone who isn't familiar with that part of the code: if C4::Context's context switching were to be resurrected and fine-tuned, Plack could probably run multiple instances from one process, but you'd lose the user separation, which would be bad.
21:38:08 magnuse: if I were to fix it, I'd fix it in plack: allowing it to say "keep 3 around, but go up to 10 if needed and then back down to 3 when things are quiet."
21:38:22 this may be possible using a different thing than starman.
21:38:41 eythian: sounds like a good thing to have, yes
21:38:54 (that is how apache works, fwiw)
21:39:03 eythian: Yes, multiple hosts per server with an ambiguous 'it'.
21:43:03 eythian: sounds like a problem that needs to be solved before we see widespread adoption of plack among vendors...
21:43:12 yup
21:43:13 (in productin)
21:43:30 the instance we have running it has its own server for unrelated reasons.
21:43:37 * magnuse looks quizzically at the "o" key on his keyboard
21:44:20 and once I did some benchmarking and tuning, making sure it can't run out of ram due to leaks, it's been handling load marvelously.
21:44:43 OK, great
21:44:46 moving on
21:44:51 but that should not stop us from testing things with plack, using the stuff from bug 7844
21:44:51 Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=7844 enhancement, P5 - low, ---, dpavlin, Needs Signoff, plack scripts for developers
21:44:56 #topic Performance and QA
21:45:08 #info Question: Should performance be an issue during the QA process?
21:45:24 two proposals were made in the first part of the meeting for policies
21:45:31 first, for QA policy - http://paste.koha-community.org/158
21:45:49 second, for an addition to the coding guidelines, http://paste.koha-community.org/160
21:47:01 feedback? objections? +1s?
21:47:04 That seems reasonable. We do have some bad database latency issues as it is; it'd be good to not have any more.
21:48:05 * magnuse waves to cait
21:48:17 :)
21:48:23 the others are still out
21:48:30 +1
21:48:50 I voted last time, don't want to double vote. :)
21:48:59 we had couscous and tajine tonight
21:49:08 oh, meeting time?
21:49:08 meeting time is always going to favour one section of the globe
21:49:30 I would like to suggest an amendment.
21:49:34 +1 - we should definitely pay close attention to performance in circ.
21:49:49 #info Katrin Fischer, BSZ Germany
21:50:22 qam in da house!
21:51:08 thd: yes?
21:51:49 Perhaps the policy should acknowledge the possibility that some possibly useful features would intrinsically degrade performance and thus may be required to have a system preference to enable or disable them.
21:52:15 I'd also like to include database accesses in the coding guidelines; that's where a _large_ proportion of time is spent at the moment.
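
A rough illustration of the trade-off eythian describes above: a single .psgi file can mount several instances in one process with Plack::Builder, which lets them share one worker pool, but every mounted app then runs under the same UID, losing the per-site user separation. The file name, mount points, and paths are hypothetical, not existing Koha configuration:

    # multi-instance.psgi -- a sketch, not existing Koha configuration
    use strict;
    use warnings;
    use Plack::Builder;
    use Plack::Util;

    # load each instance's PSGI application; the paths are made up for illustration
    my $site_a = Plack::Util::load_psgi('/etc/koha/sites/site-a/opac.psgi');
    my $site_b = Plack::Util::load_psgi('/etc/koha/sites/site-b/opac.psgi');

    builder {
        # both apps share one worker pool (and its memory), but they also
        # share one UID, which is exactly the user-separation loss noted above
        mount '/site-a' => $site_a;
        mount '/site-b' => $site_b;
    };

The separate question of a worker pool that grows and shrinks with load ("keep 3 around, but go up to 10 if needed") would be addressed in the server layer rather than in the .psgi file, which is why the log suggests it may need something other than Starman's fixed pre-forked pool.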
21:54:17 thd: -1 to that amendment -- the text as written is loose enough to cover that possibility, but I don't think it wise to open the door to either "turn this system preference on to make your Koha catalog slow" or "let's syspref away a bad initial implementation of a feature"
21:56:13 I know in the first meeting there was talk of profiling, and perhaps even having some process flag things that generate a 5% slowdown.
21:56:15 eythian: I don't think database access per se needs to be optimized for -- sure, a lot of the time a good way to avoid latency is to avoid reading from the database when you can read from a faster cache, but that strikes me as an implementation detail, not a matter for a broad guideline
21:56:46 gmcharlt: I'd put it in the same bucket as bandwidth, storage, and memory.
21:56:50 mtompset: yep, there was general agreement that some sort of automated performance testing would be a nice thing to set up
21:58:03 gmcharlt: the issue being that currently an opac-search causes 1,718 database hits, which starts to be counted in actual seconds.
21:58:20 (for example)
21:58:32 1718?!
21:59:13 http://debian.koha-community.org/~robin/opac-search/ <-- mtompset
21:59:22 N+1 select problem
21:59:36 eythian: we don't get billed for database accesses, just time, as it were
22:00:01 well, it's specifically a performance issue.
22:00:11 and the heading is "Performance" :)
22:00:16 I am not disagreeing that a useful strategy would be to reduce the number of queries made, through various means, but I do not see it as the primary thing that we're trying to reduce or conserve
22:00:33 fair enough.
22:00:41 though it is the biggest single performance killer :)
22:03:03 #agreed http://paste.koha-community.org/158 is accepted as a QA guideline
22:03:22 #agreed http://paste.koha-community.org/160 is accepted as a new coding guideline
22:04:17 any new items folks want to discuss?
22:05:02 elastic search integration is making progress
22:05:23 it works, but it can't yet do anything particularly advanced.
22:05:59 yay for progress
22:06:06 jcamins said i might start testing that stuff.
22:06:51 testing is probably a bit of a strong word at the moment, but you're welcome to have a poke at it.
22:08:11 that is a java/lucene based index engine. so would that be something where we would begin to require a java jvm, or would it be optional?
22:08:33 you shouldn't run ES on the same server you're running koha.
22:08:55 a production ES installation requires a minimum of 3 servers in a cluster for data integrity.
22:09:12 So, you would have your ES servers and point to them from Koha.
22:09:31 (I don't expect ES to be a requirement any time in the near future anyway though.)
22:09:32 and zebra should remain as an option as long as there is no z39.50/SRU based on ES, methinks.
22:09:38 yup, that too
22:10:43 eythian++ # thanks for the update
22:10:48 anything else?
22:10:48 anything else is a guess
22:10:56 wajasu: forget anything else
22:10:58 gmcharlt: I forgot anything else
22:10:58 in other news, look out for bug 11926
22:10:58 Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=11926 enhancement, P5 - low, ---, wizzyrea, NEW, Render community koha statistic usages
22:11:18 ohh
22:11:20 we might try to have it signed off and qa'ed before the end of the week...
22:11:31 I want to have realtime stats pushed to logstash one day
22:11:42 ah, that's a different thing, ignore me
22:12:01 no. I'm just all in that search code and understand many issues now.
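
For context on the N+1 select problem mentioned above, the sketch below contrasts a per-row query loop with a single batched query, using plain DBI. The table and column names follow the Koha schema, but the code, connection details, and numbers are illustrative, not the actual opac-search code path:

    use strict;
    use warnings;
    use DBI;

    # connection parameters are placeholders, not a real Koha configuration
    my $dbh = DBI->connect( 'dbi:mysql:database=koha', 'kohauser', 'password',
        { RaiseError => 1 } );

    my @biblionumbers = ( 1 .. 50 );    # pretend these came back from a search

    # N+1 pattern: one items query per biblio found by the search
    my @items;
    for my $biblionumber (@biblionumbers) {
        my $rows = $dbh->selectall_arrayref(
            'SELECT * FROM items WHERE biblionumber = ?',
            { Slice => {} }, $biblionumber );
        push @items, @$rows;
    }

    # batched alternative: a single query with an IN list instead of N queries
    my $placeholders = join ', ', ('?') x @biblionumbers;
    my $all_items = $dbh->selectall_arrayref(
        "SELECT * FROM items WHERE biblionumber IN ($placeholders)",
        { Slice => {} }, @biblionumbers );

With per-query round-trip latency on the order of a millisecond, the difference between one query and one-per-result is what turns a search page into "actual seconds", which is the point eythian makes in the log.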
22:12:10 wasn't there discussion of piwik or something on the devel list a while back?
22:13:53 * mtompset goes hunting for the email.
22:14:24 yeah, paul_p talked about having one central piwik install that any koha site could use, but i *think* it was abandoned, because it would be too much data
22:14:50 plus it's already been done
22:15:03 http://piwik.koha-community.org/
22:15:07 there is now a basic dancer app to collect data: https://github.com/clrh/hea-app
22:15:14 koha-community.org has been using it for years
22:15:19 rangi: ah, kewl
22:15:23 and a couple of other koha sites
22:16:35 can someone who really knows what it is put a description on bug 11926 and the wiki page? It doesn't actually say much about what it is going to do/be.
22:16:35 Isn't it just a matter of putting some js in a system preference to use it?
22:16:36 Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=11926 enhancement, P5 - low, ---, wizzyrea, NEW, Render community koha statistic usages
22:16:36 should the schema get regenerated?
22:16:55 Project Koha_master build #1662: SUCCESS in 1 hr 59 min: http://jenkins.koha-community.org/job/Koha_master/1662/
22:16:57 * Fridolin Somers: Bug 11845 - set overlay and import status translatable in addorderiso2709.tt
22:16:58 * Owen Leonard: Bug 10415 - Add course reserves to staff client home page
22:16:58 Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=11845 minor, P5 - low, ---, koha-bugs, Pushed to Master, set overlay and import status translatable in addorderiso2709.tt
22:16:59 Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10415 normal, P5 - low, ---, oleonard, Pushed to Master, Add course reserves to staff client home page
22:17:51 eythian: the idea is to enable koha to report stats to the dancer app, like number of records/items, settings for some interesting sysprefs, etc
22:18:18 * magnuse has not been involved in the work, but heard a presentation from the people who have this afternoon
22:21:12 * gmcharlt agrees with eythian that more detail would be nice
22:21:13 the wiki page does sum it up, i think
22:21:33 it doesn't if you have no context.
22:21:40 ^^
22:21:55 I'm pretty baffled too, tbh.
22:22:44 so, there will be a syspref: do not share data / share data / share data anonymously
22:22:54 something like "a central repository for anonymous worldwide koha usage" would be nice, if that's what it is.
22:23:06 if sharing is enabled, every month koha will push data to the dancer app
22:23:15 that's what it is
22:23:32 to be able to know how many libraries use koha, and their sizes
22:24:23 it was also a goal to report on the setting of some sysprefs, to be able to tell how koha is used, if there are features that no-one uses, etc
22:24:39 also the versions that are in use
22:24:56 but it has to be off by default
22:25:07 yup, of course
22:25:08 * mtompset agrees with rangi. "Off by default."
22:25:09 so why would someone go turn it on?
22:25:13 or even know to?
22:25:16 and an option to share anonymously
22:25:31 Perhaps something in the about?
22:25:49 for the same reason they register on the wiki or in web-lib-cats?
22:25:50 this feels like something that should be a plugin
22:25:59 ^^
22:26:01 it seems that it would be better emailing; properly configured servers shouldn't be allowing port 80 access to the outside world anyway.
22:26:09 and that
22:26:26 um, a plugin would hardly increase use?
22:26:41 cool, add your thoughts to the bug or the wiki page folks!
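
Purely a sketch of the "push data monthly to the dancer app" idea described above. The endpoint URL, payload fields, and syspref values are guesses at the kind of data discussed (record/item counts, version, selected sysprefs), not the real hea-app API:

    use strict;
    use warnings;
    use JSON;
    use LWP::UserAgent;

    # stand-in for the proposed do not share / share / share anonymously syspref
    my $share_mode = 'share data anonymously';
    exit 0 if $share_mode eq 'do not share data';    # off by default, as agreed

    # hypothetical payload; field names are illustrative only
    my $payload = {
        koha_version => '3.16.00',
        biblio_count => 123_456,
        item_count   => 234_567,
        # selected syspref values would be added here when not anonymous
    };

    my $ua  = LWP::UserAgent->new( timeout => 30 );
    my $res = $ua->post(
        'https://hea.example.org/usage',              # hypothetical endpoint
        'Content-Type' => 'application/json',
        Content        => encode_json($payload),
    );
    warn 'usage report failed: ' . $res->status_line unless $res->is_success;

Something along these lines would run from a monthly cron job, which fits the "every month koha will push data" description in the log; whether the real implementation uses HTTP or another transport (email was also floated above) is left open by the discussion.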
22:26:43 :-)
22:27:42 naw i dont think i will
22:27:54 rangi: remember bug 6293? ;-)
22:27:54 Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=6293 enhancement, P5 - low, ---, paul.poulain, RESOLVED DUPLICATE, Add a button to the intranet for registering with the Koha community
22:28:37 yep totally different thing
22:28:40 it doesn't continuously collect data
22:28:54 a button that takes you off to a form to fill in somewhere
22:28:57 and it's just a button, "report in"
22:29:43 plus it was a dumb idea then too
22:29:48 i have plenty of those
22:30:10 lol
22:30:46 ok, i'll shut up now
22:31:04 :)
22:31:23 any other topics (briefly?)
22:32:41 ok, thanks everybody
22:32:43 #endmeeting