21:01:00 <gmcharlt> #startmeeting Koha Dev Meeting, 25 February 2014 21:00 UTC
21:01:00 <huginn`> Meeting started Tue Feb 25 21:01:00 2014 UTC.  The chair is gmcharlt. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:00 <huginn`> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:00 <huginn`> The meeting name has been set to 'koha_dev_meeting__25_february_2014_21_00_utc'
21:01:12 <gmcharlt> #link http://wiki.koha-community.org/wiki/Developers_IRC_Meeting,_February_25,_2014
21:01:22 <gmcharlt> #topic Introductions
21:01:23 <wahanui> #info wahanui, a bot that has become sentient
21:01:36 <mtompset> #info Mark Tompsett
21:01:43 <gmcharlt> #info gmcharlt = Galen Charlton, 3.16 RM, Equinox, USA
21:01:48 <eythian_> #info Robin Sheat, Catalyst IT, Wellington, NZ
21:01:55 <JesseM> #info Jesse Maseto - ByWater
21:02:03 <rangi> #info Chris Cormack, Catalyst IT, Wellington, NZ
21:02:15 <thd> #info Thomas Dukleth, Agogme, New York City
21:03:26 <ashimema> #info Martin Renvoize, PTFS-Europe, UK
21:03:26 <cait> #info Katrin Fischer, BSZ
21:04:27 <gmcharlt> ok, I'm going to recap announcements from the previous meeting
21:04:30 <gmcharlt> #topic Announcements
21:04:41 <gmcharlt> #info gmcharlt will be clearing the passed QA queue prior to the beginning of the hackfest in Marseille
21:04:50 <gmcharlt> #info end of tomorrow will be cutoff for new passed QA until the hackfest in Marseille, QA team to focus on sign offs and kitten rescue
21:04:54 <rangi> cool
21:05:02 <gmcharlt> #info the hackfest in Marseille is 10-14 March
21:05:03 <wahanui> i already had it that way, gmcharlt.
21:05:14 <jcamins> #info Jared Camins-Esakov, C & P Bibliography Services
21:05:15 <mtompset> -- kitten rescue?
21:05:29 <gmcharlt> mtompset: rescuing patches in Failed QA or Does Not Apply status
21:05:48 <mtompset> thank you.
21:05:55 <ashimema> everybody loves saving kittens, mtompset
21:05:56 <gmcharlt> any other announcements folks care to make?
21:07:01 <gmcharlt> ok
21:07:03 <rangi> ill cover mine when we get to elasticsearch
21:07:08 <gmcharlt> #topic RM questions
21:07:33 <gmcharlt> I'm passing over the UNIMARC questions and repeating this one:
21:07:34 <gmcharlt> is there a reasonable Plack configuration that we could include in 3.16, and encourage as a first-class install option?
21:07:37 <gmcharlt> #info is there a reasonable Plack configuration that we could include in 3.16, and encourage as a first-class install option?
21:08:08 <ashimema> I looked over my notes for bug 9316 as promised at the earlier meeting.
21:08:09 <huginn`> 04Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=9316 enhancement, P5 - low, ---, kyle, Needs Signoff , Add Nginx install options with plack
21:08:50 <ashimema> It's not in bad shape really.. probably a worthwhile one to work through for getting plack easily usable by all
21:09:11 <eythian_> from a packaging point of view, it's close but there is one major issue with it that I haven't tried to solve yet, and that's that it won't do shared sites well at all at the moment.
21:09:33 <ashimema> it was, however, mostly aimed at making plack more easily installable and therefore testable.. I don't know if it's the 'best' way of doing it, or how it plays with packages.
21:09:52 <gmcharlt> another issue that rangi has mentioned, but which I'm not sure is in a bug yet, is management of connections to Zebra
21:09:55 <eythian_> i.e. if you have 10 workers sitting around, and 10 sites on there, that's going to end up with 100 workers running, and no memory.
21:10:17 <gmcharlt> in particular, zebrasrv will time out connections, but there's apparently no good way for the ZOOM API to detect that
21:10:27 <gmcharlt> so users can run into a backend that has lost its Zebra connection
21:10:36 <rangi> thats exactly it
21:10:44 <ashimema> ooh. not thought of either of those points.
21:10:49 <eythian_> we do have a handy script that'll turn a regular package installation into a plack installation.
21:11:12 <ashimema> cool.. eythian.. any chance of sharing that?
21:11:19 <rangi> only the opac tho
21:11:20 <ashimema> would happily help with testing here.
21:11:46 <eythian_> https://github.com/mkfifo/koha-gitify <-- ashimema
21:11:51 <eythian_> no wait
21:11:56 <eythian_> hmm, not sure where it lives.
21:12:02 <rangi> on git.catalyst
21:12:11 <rangi> 2 secs
21:12:14 <ashimema> cheers.
21:12:21 <eythian_> http://git.catalyst.net.nz/gw?p=koha-plack.git
21:12:23 <eythian_> ^-- there
21:12:28 <rangi> thats it
21:13:21 <rangi> however for the search to work consistently
21:13:26 <rangi> you need something like this
21:13:29 <rangi> http://git.catalyst.net.nz/gw?p=koha.git;a=commitdiff;h=459c750e4b0aa0fe5dba601e423e78070383b97b;hp=2c9581f75fdb1f76e60c83f86e253cb4dfe04597
21:14:16 <rangi> (but nicer obviously)
21:14:21 <mtompset> Hmmm... that reminds me of a patch I recently signed off.
21:14:23 <eythian_> Ideally we'd build the proxy into the install.
21:14:26 <mtompset> or read.
21:15:18 <ashimema> so.. in short it needs a bit more thought and testing
21:15:24 <rangi> yes
21:15:38 <rangi> the opac, apart from that problem works well
21:15:40 <gmcharlt> that confirms my suspicions
21:15:44 <eythian_> but when it works, it works really nicely.
21:15:45 <rangi> the intranet needs a ton more testing
21:15:54 <ashimema> I thought paul_p had it running somewhere.. surprised he's not pointed out the search issues
21:16:12 <cait> hm he said something about having to restart it or something?
21:16:21 <ashimema> yeah.. good point cait.
21:16:23 <cait> also the problem is that a single library will never use all features
21:16:34 <cait> circ might work well, while some tool doesn't
21:16:55 <thd`> To what extent is work on elastic search being shared between Koha and Evergreen?
21:16:56 <wajasu> has anyone thought of using systemd to manage start/restart/stop of servers.  i believe systemd can be a watchdog and restart crashed services.
21:17:07 <rangi> thd`: we arent even up to that yet
21:17:21 <thd`> sorry, I was disconnected
21:17:32 <ashimema> indeed.. I was hoping to use a few of our more friendly customers as guinea pigs
21:17:38 <eythian_> wajasu: the problem is mostly memory leaks and unreleased resources, I think.
21:17:40 <gmcharlt> replacing Evergreen's search engine is not on anybody's radar to my knowledge
21:17:43 <rangi> wajasu: its not a stopped/crashed server
21:17:46 <rangi> its a timed out connection
21:17:55 <wajasu> oh
21:17:59 <rangi> zebra very very very rarely crashes
21:18:12 <rangi> the ZOOM api has no way of knowing if a connection is live
21:18:19 <rangi> in fact z3950
21:18:33 <rangi> is not designed to be used in the way we use it
21:18:59 <rangi> however, 99% of the world try to do the same thing we do
21:19:08 <rangi> which is why yaz-proxy and things like it exist
21:19:17 <rangi> and why LOC use yaz-proxy in front of their ILS
21:19:19 <gmcharlt> indeed
21:19:35 <gmcharlt> I think this is a reasonable segue to...
21:19:39 <gmcharlt> #topic Elastic Search
21:19:44 * ashimema got left behind at the mention of yaz-proxy...
21:19:49 <gmcharlt> rangi: can you give a summary of where Catalyst is with it?
21:20:02 <thd`> rangi: In what way do you mean not designed for our use case?
21:20:04 <rangi> we have one client running the opac in production
21:20:21 <gmcharlt> ashimema: (briefly, yaz-proxy in front of Zebra will take care of retrying and reopening the Zebra connection if it times out)
21:20:27 <rangi> with the above commit, to make it reconnect to zebra each search (exactly what happens under cgi)
21:20:41 <ashimema> thought that might be the case.. cheers gmcharlt
21:20:53 <rangi> it works great, and has stopped the site dying under big load which it used to do each month
21:21:06 <rangi> (when they publish a newsletter that pounded the opac to death)
21:21:16 <eythian_> yeah, it barely broke a sweat with thousands of requests in a short space of time
21:21:21 <ashimema> I see, a worthwhile note..
21:21:22 <rangi> we are planning to do a lot more testing on the intranet
21:21:25 <eythian_> normally it'd be all OOM then die
21:21:28 <rangi> yep
21:21:51 <rangi> id happily run the opac under plack, with the ZConn fix
21:21:56 <eythian_> (I did spend time tuning it to ensure it couldn't OOM under plack too; that's necessary but not hard.)
21:22:00 <rangi> in production
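[editor's note] For the curious, the ZConn workaround in the commit rangi linked boils down to something like this pseudocode sketch (the method names follow C4::Context, but treat this as illustrative, not the actual diff):

```perl
# Illustrative pseudocode only -- not the actual commit.
# Under CGI a cached ZOOM connection dies with the process, but a
# long-lived Plack worker can hold one past zebrasrv's idle timeout,
# and the ZOOM API gives no way to detect that it has gone stale.
# The blunt fix: skip the cache and reconnect on every search.
sub Zconn {
    my ( $self, $server ) = @_;
    return $self->_new_Zconn($server);    # fresh connection each time
}
```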
21:22:19 <rangi> thd`: holding connections open
21:22:56 <ashimema> so.. may be worth working that installer patch out such that it cleanly adds the workaround so we can get more sites testing rigorously.
21:23:13 <rangi> but it would be better to get yaz-proxy in there, which will be useful under cgi too
21:23:16 * ashimema adds mental note
21:23:21 <thd`> rangi: If you do not hold connections open with Z39.50 your request fails.
21:23:36 * ashimema apologises for holding up conversation on ElasticSearch..
21:23:42 <eythian_> there's no reason we couldn't add it to the package install, perhaps as an option, aside from someone taking the time to do it.
21:23:49 <rangi> *nod*
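[editor's note] To make the yaz-proxy idea concrete: a config along the following lines would sit between Koha and zebrasrv and transparently reopen timed-out backend connections. This is a sketch only -- the ports are hypothetical, and the element names should be checked against the yaz-proxy documentation for your installed version:

```xml
<?xml version="1.0"?>
<!-- Sketch: yaz-proxy listening where Koha expects Zebra,
     forwarding to the real zebrasrv on another port. -->
<proxy xmlns="http://indexdata.dk/yazproxy/schema/0.9/">
  <target name="zebra" default="1">
    <url>tcp:localhost:9999</url>
    <!-- recycle backend connections rather than hand the
         client a stale one -->
    <target-timeout>240</target-timeout>
    <client-timeout>180</client-timeout>
  </target>
</proxy>
```

It would then be started with something like `yazproxy -c koha-proxy.xml @:9998`, with Koha's `<listen>` entries pointed at port 9998.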
21:24:10 <gmcharlt> ok, dragging us back on topic, please...
21:24:13 <cait> ashimema: things are clearer for me now too - so thx for holding up
21:24:16 <cait> :)
21:25:06 <rangi> right elastic search
21:25:44 <rangi> we have indexing (when a biblio is modified) and a basic search going
21:26:06 <rangi> eythian has done work on an iterator for biblios to allow for a batch index (amongst others)
21:26:33 <eythian_> still not ready, but getting there.
21:26:36 <rangi> next on the cards is a browse search, which is one of the requirements of the funding institution
21:27:28 <rangi> after that, intranet search
21:27:53 <gmcharlt> browse in this case meaning what, exactly? bib headings, authority headings, or both?
21:27:54 <thd`> rangi: How is browse search envisioned by the client?
21:28:01 <rangi> then time permitting, create a fully js search page, (since elasticsearch hands back JSON)
21:28:19 <rangi> gmcharlt: kinda both
21:28:28 <cait> rangi: what does basic search include?
21:28:32 <rangi> thd`: a library browse .. so something no one else on the planet would understand
21:28:41 <rangi> but apparently is vitally important to librarians
21:29:10 <ashimema> lol, I've never really understood what librarians mean by 'browse search'
21:29:12 <rangi> cait: just author, keyword, title at the moment
21:29:25 <cait> that's not too bad :)
21:29:42 <eythian_> once the fundamentals are in place, adding more is easy.
21:29:47 <rangi> we aim to have a demo site up for playing with at the hackfest
21:29:52 <gmcharlt> WIP can be viewed where?
21:29:55 <cait> mostly alphabetic index i think - libraries here are asking for that as well
21:29:56 <thd`> rangi: Does that mean browsing the semantic space of classification or subject headings hierarchically?
21:29:56 <rangi> yep
21:30:09 <rangi> http://git.catalyst.net.nz/gw?p=koha.git;a=shortlog;h=refs/heads/elastic_search
21:30:21 <rangi> thd`: no idea, im not gonna reread the spec thing now
21:30:32 <thd`> ;)
21:30:43 <rangi> blah blah millenium blah blah horizon .. blah blah
21:30:46 <rangi> thats what it said to me
21:30:50 <ashimema> silly question time.. how does it actually fit together in terms of side by side with zebra, drop-in replacement.. does your work do anything to abstract so we could use various search backends?
21:31:04 <rangi> im hoping we can do something that humans will actually understand though :)
21:31:22 <rangi> ashimema: its working side by side
21:31:35 <rangi> if you switch the syspref it indexes to elasticsearch TOO
21:31:45 <rangi> ie its still indexing zebra in the background
21:32:04 <rangi> so you can have elasticsearch on the opac and zebra on the back (like now)
21:32:22 <ashimema> I see..
21:32:27 <rangi> it might be a switch, when the intranet searches work with it too
21:32:56 <ashimema> so keeping zebra for z39.50 (SRU) support, but using ES for OPAC and eventually intranet search
21:33:05 <rangi> eventually yeah
21:33:12 <thd`> rangi: Do you mean that your implementation is merely complementary to what Zebra provides because we have tied so much functionality to Zebra?
21:33:13 <ashimema> coolios.
21:33:34 <rangi> thd`: i dont want to write a z3950 or sru/sw server
21:33:58 <rangi> is the short answer
21:34:16 <thd`> rangi: of course, but I meant update triggers for the database etc.
21:34:17 <ashimema> z3950/SRU is what zebra is built for.. we may as well keep using it for that ;)
21:34:24 <ashimema> at least in the medium term.
21:34:30 <rangi> yeah, kaizen :)
21:34:52 <rangi> i figure small incremental improvements are easier to test, and less likely to bustinate everything
21:35:32 <rangi> the indexer is very simple, at the moment it's marc only, but would be able to handle other documents with minor changes
21:35:33 <ashimema> any comments on abstraction rangi..  is it likely we'll be ableto use this work to 'plug in' other indexers in the future via writing a 'driver'..
21:35:39 <rangi> probably not
21:36:01 <rangi> i think that abstraction can actually make it more obtuse and slow
21:36:17 <ashimema> fair point.
21:36:18 <rangi> http://git.catalyst.net.nz/gw?p=koha.git;a=blob;f=Koha/ElasticSearch/Indexer.pm;h=d7dd8dc011fa2d09fc452aa8e29351815f30dc1d;hb=dcb4ab577d305ec415d06b84a0eca7d58ea685b4  <-- indexer
21:36:19 <thd`> gmcharlt: At the last Kohacon hackfest the issue of some shared work between Koha and Evergreen in relation to search was discussed.
21:36:49 <gmcharlt> rangi: ashimema: well that's a relief -- I was worried that we'd never have anything to talk about on koha-devel
21:36:57 <rangi> heh
21:37:01 <gmcharlt> now I know we're set for life! ;)
21:37:20 <rangi> we could make Koha::Indexer
21:37:22 <gmcharlt> thd`: the context was most likely QueryParser
21:37:48 <rangi> which abstracts over Koha::ElasticSearch::Indexer
21:37:50 <gmcharlt> I repeat, I know of no serious thoughts about changing Evergreen's search engine at present
21:38:03 <ashimema> QueryParser was going to be my next question.. in that. how does it fit in in the scheme of things with ES?
21:38:08 <thd`> gmcharlt: Was the consideration of shared work merely for the user interface parser to which any backend system could be used?
21:38:22 <rangi> id like that to abstract over Koha::ElasticSearch::Search
21:38:43 <rangi> http://git.catalyst.net.nz/gw?p=koha.git;a=blob;f=Koha/ElasticSearch/Search.pm;h=2efb53e04ab5e3b01958009ae925f3e1530cce51;hb=dcb4ab577d305ec415d06b84a0eca7d58ea685b4 <-- so tiny and cute
21:39:09 <gmcharlt> well, in QP-speak it would be more likely a driver that translates to ES queries
21:39:14 <rangi> yeah that :)
21:39:33 <ashimema> yeah.. that's what I meant..
21:39:36 <ashimema> ;)
21:39:44 <ashimema> thanks gmcharlt
21:39:51 <gmcharlt> but unless you are about to tell me that ES uses ASN.666/BER, we're probably OK ;)
21:40:04 <rangi> nope
21:40:12 <rangi> its super simple
21:40:50 <cait> so we could use query parser to translate searches into elastic search searches?
21:40:51 * ashimema clueless again :$
21:40:56 * cait tries to follow
21:41:03 <gmcharlt> #link http://git.catalyst.net.nz/gw?p=koha.git;a=shortlog;h=refs/heads/elastic_search
21:41:03 <rangi> yes thats the plan
21:42:04 <rangi> http://search.cpan.org/~drtech/ElasticSearch-0.66/lib/ElasticSearch.pm#Query_methods
21:42:09 <rangi> if you are interested
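[editor's note] For anyone who hasn't met Elasticsearch before: the queries behind that module are just JSON over HTTP. A simple title search like the ones in the branch would look roughly like this (the field name is illustrative -- the actual mapping is defined by the indexer):

```json
{
  "query": {
    "match": {
      "title": "programming perl"
    }
  }
}
```

POSTed to an endpoint such as `http://localhost:9200/koha/_search`, with the hits coming back as JSON -- which is what makes the all-JS results page rangi mentions plausible.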
21:42:47 <rangi> #info we plan to have a demo in time for the hackfest
21:42:53 <cait> very cool
21:42:57 <cait> totally curious :)
21:43:45 <rangi> its a well documented module, which is very handy
21:44:46 <rangi> thats about all i have
21:45:02 <cait> rangi++ eythian++
21:45:11 <ashimema> ++
21:45:24 <rangi> i feel like it will make life a lot better for the future and help with our move from MARC
21:45:33 * ashimema best go read the manual
21:45:39 <bag> here - sorry to be late
21:45:49 <gmcharlt> OK, next topic
21:45:54 <mtompset> actually... how does all this search stuff relate to facets showing or not showing?
21:46:02 <gmcharlt> #topic DBIx::Class
21:46:20 <bag> ah I have something to add about Zebra - when it's back to that subject
21:46:33 <gmcharlt> oh, OK :)
21:46:37 <gmcharlt> #topic Searching
21:46:41 <gmcharlt> bag go for it
21:46:45 <ashimema> :)
21:47:04 <bag> I've talked with gmcharlt and rangi about this.  but we've seen zebra fast indexing - when it's doing a merge
21:47:13 <bag> uses 100% of the I/O
21:47:20 <bag> just at that spot
21:47:31 <thd`> Even with MARC, anything which scales better is an advantage.
21:47:38 <bag> rangi or gmcharlt please feel free to say what I said more gracefully
21:47:52 <rangi> yeah, it becomes I/O bound on the merge step
21:48:03 <eythian_> a merge will use a ton of IO, because it's beating up on the disk.
21:48:04 <ashimema> yup.. we've seen that too bag
21:48:34 <gmcharlt> and at present nobody's yet dived deep enough into the code to see if it can be readily remedied
21:48:37 <bag> we've done some testing and can only confirm that if you have mysql on the same disk then everything freezes for a bit
21:48:53 <eythian_> a workaround would be to run it with ionice so that other things get priority.
21:49:01 <gmcharlt> indeed
21:49:04 <thd`> What are the conditions under which a merge happens with Zebra?
21:49:19 <gmcharlt> (and separating out I/O for DB vs. everything else is often a good idea for large installations)
21:49:28 <eythian_> or have your DB on another server. I've also been meaning to look into having zebra on its own server, but haven't really had the need.
21:49:45 <bag> thd`: fast indexing - once you are TRYing to add to an index that is already created
21:50:02 <rangi> chunking the merges into smaller bits may be a solution too
21:50:10 <rangi> but not sure
21:50:19 <wajasu> maybe some unix command like renice or ionice can bind the IO for that process. - just a guess
21:51:04 <bag> rangi we've seen it with merges of as little as 100 records
21:51:09 <gmcharlt> well, if it turns out to help, ionice could be incorporated into the script that launches rebuild_zebra.pl easily enough
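[editor's note] A sketch of what that incorporation might look like: prepend `ionice -c 3` (the "idle" I/O scheduling class) so zebraidx merges only get disk time when nothing else wants it. The rebuild_zebra.pl path below is the usual Debian package location -- an assumption, adjust for your layout -- and the command is echoed rather than run, since actually running it needs a full Koha install:

```shell
# Build the rebuild command with ionice's idle class prepended,
# so a big merge can't starve MySQL of disk I/O.
REBUILD="/usr/share/koha/bin/migration_tools/rebuild_zebra.pl -b -a -z"
CMD="ionice -c 3 $REBUILD"
# Dry run: show the command instead of executing it.
echo "$CMD"
# prints: ionice -c 3 /usr/share/koha/bin/migration_tools/rebuild_zebra.pl -b -a -z
```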
21:51:24 <rangi> bag: maybe not then, darn
21:51:35 <rangi> gmcharlt: thats definitely worth trying
21:51:51 <bag> good thought - let's get a bug for that
21:51:56 <rangi> bag++ #pie
21:51:59 <gmcharlt> bag: rangi: zebraidx may end up doing index rebalancing regardless of the size of the merge, perhaps
21:52:02 <bag> heh
21:52:03 <eythian_> totally trivial to test, too.
21:52:06 <bag> gmcharlt++
21:52:14 <gmcharlt> bag++
21:52:17 <cait> bag++ gmcharlt++
21:52:28 <gmcharlt> of course, here's a crazy thought:
21:52:39 <gmcharlt> store the zebra files on a ramdisk
21:53:18 <bag> we've gotten around it by getting mysql on a different disk than zebra idx
21:53:19 <rangi> hmm thats not that crazy
21:53:20 <eythian_> could have problems when they get large, also having to do a full rebuild on boot could take a long time.
21:53:31 <bag> but I'm not sure everyone could afford such a thing
21:53:34 <rangi> yeah
21:53:35 <gmcharlt> eythian_: yeah, that's the obvious tradeoff
21:54:02 <gmcharlt> of course, one could s/ramdisk/SSD/, but that still involves expense
21:54:07 <rangi> you could index on another machine
21:54:08 <eythian_> I do think splitting the disks for the different things could be considered a best practice though.
21:54:09 <rangi> and rsync
21:54:09 <thd`> gmcharlt: If something like rebalancing is happening then the process should be batched to cron for late at night.
21:54:26 <rangi> that may still i/o bind you
21:54:28 <gmcharlt> indexing but once a day is a non-starter for general use
21:54:49 <eythian_> thd`: then you don't have updates in near realtime
21:55:14 <bag> eythian_: best practice yes - but may not be possible for all people installing koha
21:55:27 <ashimema> I've dabbled with sticking indexes on ssd whilst everything else is on hdd.. it 'felt' quicker, but I ran out of time to really benchmark it properly
21:55:28 <thd`> eythian_: Yes but a non-thrashing system is important.
21:55:41 <eythian_> bag: yep. I'd slap an ionice in front of the rebuild command, see if that helps to start with.
21:55:46 <mtompset> bag++ #pie
21:55:53 <mtompset> gmcharlt++
21:56:19 <gmcharlt> bag: is there a bug yet?
21:56:36 <bag> I don't know gmcharlt
21:57:04 <gmcharlt> if not, please file one -- I can't think of a better place at the moment to aggregate information about zebraidx I/O performance
21:57:11 <bag> if I don't find it - I will create one
21:57:23 <gmcharlt> thanks
21:57:32 <gmcharlt> and now I really will change topics
21:57:36 <gmcharlt> #topic DBIx::Class
21:57:38 <bag> thanks
21:57:46 <gmcharlt> to summarize the discussion this morning
21:57:59 <gmcharlt> there's some pending disagreement about appropriate use of DBIC
21:58:44 <gmcharlt> with opinions essentially ranging between using the DBIC schema classes as is, and only adding additional layers of abstraction where absolutely needed -- representing bib records is one example
21:58:58 <gmcharlt> versus wrapping a layer over DBIC objects across the board
21:59:09 <gmcharlt> and enforcing non-DBIC in the .pl files
21:59:15 <gmcharlt> I hope I've represented the range fairly
21:59:47 <ashimema> that about covers the conversation from earlier gmcharlt
21:59:54 * cait agrees
21:59:57 <rangi> right
22:00:34 <gmcharlt> my personal view is the former
22:00:53 <gmcharlt> given that there's a lot of places that simply need to shovel data from DB to presentation
22:01:07 <wajasu> and i wanted to get tests written across the data access layer, possibly with NYTProf stats.
22:01:20 <gmcharlt> and other record types where only a few supplementary methods would need to be added to the schema classes
22:01:49 <mtompset> Well, what was the purpose of bringing in the DBIC schema classes into Koha? Was it not DB-agnosticism?
22:01:50 <gmcharlt> I had made an action item for myself to write up some examples
22:01:55 <rangi> i agree
22:02:20 <gmcharlt> but I would appreciate it if other folks would take up a small bit of functionality and also do some experimentation
22:02:43 <rangi> there is some in the elastic search stuff
22:03:10 <gmcharlt> one thing that seems reasonably clear to me, for example, is that adding some syntax sugar to concisely fetch an object of the appropriate schema type given a known id would be nicer than Koha::Database->...->rs()
22:03:17 <rangi> yep
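[editor's note] To illustrate the kind of sugar being discussed, here is a pseudocode sketch; every package and method name in it is hypothetical, not an agreed design:

```perl
# Hypothetical sketch of a thin base class providing the sugar;
# none of these names are settled.
package Koha::Object;

sub find {
    my ( $class, $id ) = @_;
    # _type() would map e.g. Koha::Borrower => 'Borrower',
    # the name of the corresponding DBIC result class
    return Koha::Database->new->schema->resultset( $class->_type )->find($id);
}

# so callers could write
#   my $patron = Koha::Borrower->find($borrowernumber);
# instead of spelling out the resultset lookup each time.
```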
22:03:25 <wajasu> i did run Test::DBIx::Class::Schema against your master last week and it all ran through.
22:03:37 <rangi> sweet
22:03:42 <rangi> http://blogs.perl.org/users/ovid/2014/02/using-dbixclassschemaloader-to-find-design-flaws.html
22:03:46 <rangi> this is worth reading too
22:03:51 <gmcharlt> mtompset: DB-agnosticism, entering the OO-age, and reducing the need for manually-written SQL
22:04:04 <gmcharlt> but none of that is affected by the current discussion about how to structure use of DBIC
22:04:11 <rangi> exacterly
22:04:55 <cait> i think examples would be great
22:04:56 * ashimema looks forward to some examples that fall into the two camps.
22:05:05 <ashimema> cait.. you beat me to it again.
22:05:10 <cait> it didn't look too hard when you showed us a bit at kohacon... but i haven't had a chance to look at it since
22:05:19 <mtompset> cait++ # always on the ball.
22:06:33 <gmcharlt> OK, moving on
22:06:38 <larryb> apologies for joining late, but I wanted to throw out a comment regarding the index rebuild disk IO problem
22:06:39 <gmcharlt> #topic Pending Large Enhancements
22:06:55 <gmcharlt> larryb: please hold a few minutes if you don't mind
22:07:00 <larryb> sure gmcharlt
22:07:26 <gmcharlt> by large enhancements, I'm basically looking for works-in-progress that are hoped to make it in for 3.16 or 3.18, but which are not necessarily visible enough
22:07:53 <gmcharlt> and which are large enough or world-changing enough that a lot of special work may be required of the testers
22:07:58 <gmcharlt> the ones I know of include:
22:08:01 <gmcharlt> - ES
22:08:07 <gmcharlt> - the new cataloging editor
22:08:10 <bag> pianohacker: you here?
22:08:11 * ashimema hides.. it's at this point I brought up the logging bug in the last meeting
22:08:20 <pianohacker> yo
22:08:27 <gmcharlt> - Joubu's column-management stuff
22:08:42 <rangi> im pretty sure i wrote a patch using log4perl that did that, but i cant remember
22:08:45 <bag> alright if peeps have questions for pianohacker's rancor for large enhancements
22:08:53 <rangi> heres something to look at
22:09:10 <gmcharlt> particularly given the recent discusssions on the mailing list about ES, I want to make sure that we don't have folks quietly working on stuff
22:09:16 <rangi> bag: for pianohacker http://holloway.github.io/doctored/
22:09:20 <jcamins> rangi: it's on the logging bug as an attachment.
22:09:25 * cait promises not to quietly work on world changing things :)
22:09:28 <ashimema> rangi: yeah, you did.. there's a passed qa bug that's really still in discussion.. hotly debated earlier
22:09:32 <rangi> click on document, then switch the schema to marc21
22:09:44 * bag has trouble being quiet :P
22:09:57 <francharb_afk> bye all
22:10:39 <eythian_> ashimema: what is the logging bug?
22:10:40 <bag> ah pianohacker you catch that link from rangi ?
22:10:40 <rangi> ive been working on NCIP stuff, its kinda aside to Koha
22:10:42 <ashimema> we're working on a refactoring of borrower_imports, ColinC and I are trying to get EDI back on the playing field and i'm not sure where housebound is at from our front..
22:10:51 <rangi> ie its not going to be committed to Koha
22:10:55 * ashimema thanks cait for reminding him about those
22:10:56 <pianohacker> rangi: yup, taking a look at it, very interesting
22:11:01 <ashimema> we're also working on ILL
22:11:27 <bag> ashimema: I'm very interested in the edi work - we've been using the stuff already submitted and not sure how much we've changed
22:11:38 <rangi> http://git.evergreen-ils.org/?p=working/NCIPServer.git;a=summary  <-- here
22:11:39 <rangi> and
22:11:42 <ashimema> eythian_: bug 8190
22:11:43 <huginn`> 04Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=8190 enhancement, P5 - low, ---, jonathan.druart, Passed QA , Add a logging module to Koha
22:11:59 <rangi> http://git.catalyst.net.nz/gw?p=NCIPServer.git
22:12:04 <gmcharlt> ashimema: say more about ILL?
22:12:05 <eythian_> oh, I was thinking there was a bug in koha about logging :)
22:12:39 <ashimema> bag: I've spoken to khall about it.. we're on good ground with it at the moment between us.
22:12:53 <bag> awesome
22:12:58 <ashimema> should be a new patch shortly.. much cleaner and easier to maintain.
22:13:01 <bag> communication++
22:13:27 <cait> communication++
22:13:43 <ashimema> gmcharlt: ILL is not my baby i'm afraid.. colinc and mark have been working on it.. I believe it's currently going through some refactoring to meet 3.16 guidelines
22:13:50 <ashimema> communication++
22:14:14 <gmcharlt> ashimema: pointers to more information would be great, if you wouldn't mind poking them
22:14:31 <pianohacker> I can make a patch series fairly soon, but I'm curious as to what sort of functionality would be necessary for a first series; it can save to the catalog, edit existing editors, search, etc.
22:14:48 <ashimema> i've also got a rotating collections (of european style) under development.. but that won't be ready for a while...
22:14:53 <gmcharlt> it edits editors? wow! ;)
22:14:56 <ashimema> it's more thought than code at the mo.
22:15:08 <ashimema> i'll poke gmcharlt
22:15:23 <pianohacker> oh dangit
22:15:33 <pianohacker> *edit existing gmcharlts
22:15:44 <cait> pianohacker: what i wondered - is rancor next to the existing interface for now or trying to replace it?
22:15:53 <gmcharlt> pianohacker: permission denied
22:15:54 <gmcharlt> ;)
22:15:59 <bag> pianohacker: you are editing gmcharlt
22:16:05 <pianohacker> sudo rancor gmcharlt
22:16:24 <gmcharlt> pianohacker: the functions you've outlined (save to catalog, edit existing records, and search) sound plenty for a first cut
22:16:37 <ashimema> not 100% sure how current this is.. but colins ILL branch is at: https://github.com/colinsc/koha/tree/ill_wip
22:16:42 <pianohacker> cait: Stay next to; the old editor is intended to stay around as a basic editor
22:16:45 <gmcharlt> I *really* want something out there for folks to be actively testing outside of your demo environment
22:16:50 <cait> pianohacker: glad to hear that :)
22:16:55 <ashimema> it's been on hold for a few months with other priorities taking hold
22:16:56 <cait> we have small libraries not used to marc21
22:17:11 <bag> cool pianohacker submit some basics :)
22:17:18 <pianohacker> Yes, definitely. I'm working on polishing up the fixed field stuff a bit then I'll send it out
22:17:33 <bag> before marseille hackfest?
22:17:46 <cait> if it is next to existing, it could maybe be experimental
22:17:51 <cait> and if it's not messing up data :)
22:18:08 <bag> yes don't submit anything that messes up data
22:18:13 <bag> :)
22:18:18 <cait> :)
22:18:21 <pianohacker> yes, either this week or early next week
22:18:29 <bag> thanks :)
22:18:49 <gmcharlt> great
22:19:01 <gmcharlt> OK
22:19:02 <pianohacker> And it _should_ round trip cleanly, doesn't try to do anything fancy with whitespace or anything like that, and should be UTF-8 clean; are there any other likely issues?
22:19:29 <gmcharlt> pianohacker: eh, making sure it doesn't trim leading/trailing whitespace unexpectedly
22:19:37 <cait> pianohacker: translations? *hides*
22:19:38 <gmcharlt> e.g., from the fixed fields and the 010$a
22:19:47 <gmcharlt> ah, yes - i18n
22:19:54 <bag> cait let's talk about that and work on that at the hackfest
22:19:55 <pianohacker> gmcharlt: Yup, the LCCN was one of my concerns
22:20:08 <pianohacker> cait: I'm working hard to make it translatable, don't worry :)
22:20:13 <gmcharlt> #topic Next meeting
22:20:25 <bag> cait everything I've seen so far - isn't bad for translations
22:20:31 <gmcharlt> tentatively, a follow-up meeting has been agreed to
22:20:45 <gmcharlt> two-part like today, at 15UTC/21UTC on 12 March 2014
22:20:50 <gmcharlt> +1/-1 ?
22:20:51 <wahanui> -1
22:21:29 <wajasu> we also mentioned in the prior meeting that ICU should maybe become the default for zebra.
22:21:53 <gmcharlt> right
22:21:55 <rangi> +1
22:22:17 <pianohacker> +1
22:22:19 <cait> +1 # does it count again?
22:22:25 <eythian_> +1
22:22:30 <gmcharlt> cait: stop stuffing the ballot box ;)
22:22:47 <cait> sorry....:)
22:22:55 <gmcharlt> but seriously
22:22:57 <gmcharlt> #agreed Next dev meeting will be at 15UTC/21UTC on 12 March 2014 (pending confirmation from the second half of today's meeting)
22:23:00 <bag> +1  and good timing - during the hackfest
22:23:03 <gmcharlt> er
22:23:14 <gmcharlt> #agreed Next dev meeting will be at 15UTC/21UTC on 12 March 2014 (achievement unlocked: confirmation!)
22:23:25 <gmcharlt> #endmeeting