User talk:Ahasuerus/Archive/2017


NONFICTION in a magazine

See what I just found here. I've already submitted a change from NONFICTION to an ESSAY for the offending title but isn't there a report to catch things like that? :) Annie 01:56, 5 January 2017 (UTC)

There is a cleanup report which looks for reference title type/publication type mismatches. However, I don't think it knows about non-fiction titles, which are borderline container titles. Sort of. I'll take a closer look -- thanks for catching the problem! Ahasuerus 01:58, 5 January 2017 (UTC)
Well - they are containers because they cannot be inside of other books besides omnibuses (I am not even sure you can do that) - same treatment as NOVEL, isn't it - not a container but not an internal title either. But the container here is MAGAZINE, not a NONFICTION. Annie 02:06, 5 January 2017 (UTC)

Pseudonym languages

Can we have a script that sets a pseudonym's language to the main author's language when the latter is already set? A lot of these pop up unassigned and all they need is "check the main author and assign the same language". Thanks! Annie 18:59, 5 January 2017 (UTC)

Please go ahead and create an FR. I think it should be workable. I'll take a closer look when I feel better. Ahasuerus 00:32, 6 January 2017 (UTC)
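(For illustration, roughly the kind of one-off script such an FR would describe - a sketch only, with assumed table and column names (authors.author_language, a pseudonyms link table) rather than the actual ISFDB schema:)

  # Sketch only: copy the canonical author's language to pseudonyms that
  # have none.  Table and column names are assumptions, not the real schema.
  import MySQLdb

  db = MySQLdb.connect(db="isfdb")  # connection details are placeholders
  cursor = db.cursor()
  cursor.execute("""
      UPDATE authors AS pseud
        JOIN pseudonyms AS link ON link.pseudonym = pseud.author_id
        JOIN authors AS canonical ON canonical.author_id = link.author_id
      SET pseud.author_language = canonical.author_language
      WHERE (pseud.author_language IS NULL OR pseud.author_language = 0)
        AND canonical.author_language IS NOT NULL
        AND canonical.author_language != 0
  """)
  db.commit()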

interior artwork as translation

From Bilbon viimeinen laulu: Bilbon viimeinen laulu • interior artwork by Pauline Baynes (trans. of Bilbo's Last Song 1990)

I think listing an artwork variant as a translation is odd (and gets right into Stonecreek's objections on the Community Portal‎). I recommend ignoring the language for artwork and listing it simply as a variant (the way it would be displayed if the languages were the same). Thanks. -- JLaTondre (talk) 23:12, 5 January 2017 (UTC)

It is easy to do, but I think it would be best if we reached a consensus re: art and languages before we made any changes. At the rate we are going, all of our titles will have a language code assigned within the next 4-6 weeks, which should significantly reduce the amount of time that we spend on languages and (hopefully) clarify the situation going forward. Ahasuerus 00:36, 6 January 2017 (UTC)
I like that idea a lot - it keeps the work pages clean but does not lose the language designation on the works themselves.Annie 17:58, 8 January 2017 (UTC)
I'm afraid that this proposal would lose the language designation (as it'd be correct in my opinion): who would have done the translation of the cover art (or did Pauline Baynes repaint her artwork in Finnish)? Stonecreek 04:46, 9 January 2017 (UTC)
What are you talking about? There is no translator there now either. And as it is art, it is just a variant, not a translation. Annie 07:33, 9 January 2017 (UTC)
I am talking about the same statement about the interior artwork in the pub. record as JLaTondre: (trans. of Bilbo's Last Song 1990). Stonecreek 08:46, 9 January 2017 (UTC)
What JLaTondre is talking about is changing the way the interior art is listed on the publication page. The designation remains as is, it just does not say "translation" in the contents section. Annie 17:10, 9 January 2017 (UTC)
Oh, fine! That'd be nearly just as good as the real thing! Stonecreek 19:25, 9 January 2017 (UTC)
It looks like this Talk page discussion has reached a mini-consensus. If so, please post the proposal on the Community Portal so that we could gather additional feedback before we create an FR and implement it. Ahasuerus 22:39, 10 January 2017 (UTC)
Done Annie 00:08, 12 January 2017 (UTC)
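(For the record, the display change agreed on above amounts to something like the check below - a sketch with made-up names, not the actual ISFDB rendering code; the exact wording of the same-language form is also an assumption:)

  # Sketch: show art titles as plain variants even across languages,
  # while the language stays stored on the title record itself.
  ART_TYPES = ('COVERART', 'INTERIORART')

  def variant_note(ttype, parent_title, parent_year, parent_lang, own_lang):
      if ttype in ART_TYPES or parent_lang == own_lang:
          return '(variant of %s %s)' % (parent_title, parent_year)
      return '(trans. of %s %s)' % (parent_title, parent_year)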

Jalada 02

I've twice tried to import contents to the anthology Jalada 02 and both times the process broke down partway through [1] [2]. Can you find out what the problem is? --Vasha 15:54, 8 January 2017 (UTC)

Let me restore the latest backup on the development server and re-approve the submissions. Thanks for reporting the problem! Ahasuerus 16:03, 8 January 2017 (UTC)
Unfortunately, I was able to re-approve the 2 problem submissions without running into any issues on the development server. Perhaps the difference in behavior is explained by the fact that the development server is running a somewhat more recent version of the database software (due to technical issues.) I don't think there is much else we can do until we upgrade the main server :-( Ahasuerus 17:16, 8 January 2017 (UTC)
No problem -- I will just keep recreating the submission and trying again; it seems like 10-12 records get added before everything stops, so maybe all the rest will get there next time. --Vasha 17:24, 8 January 2017 (UTC)
Import them in batches - too many entries at the same time choke the server (from what I have seen myself and from a few other posts around). Annie 17:57, 8 January 2017 (UTC)
The XML parser that we use has a known memory leak problem, which may cause submissions to error out. However, it only seems to happen when there are hundreds of Contents items. These two submissions had a few dozen Contents items, so in theory they should have been safe. I suspect that they contained a special character which the live version of the database software couldn't handle. Ahasuerus 18:18, 8 January 2017 (UTC)
After examining one of the pubs closer, I discovered that the title "Sleep Naked" (now fixed) was author-less. That can only happen when the submission approval process errors out when processing the Contents item. It should narrow things down quite a bit. Ahasuerus 18:25, 8 January 2017 (UTC)
I have checked the newest iteration against the publication and it's all there and correct. Thanks! --Vasha 18:55, 8 January 2017 (UTC)

China Chinese vs. Taiwan Chinese

Hi. Would you be so kind as to visit this and comment on Karen's language distinction question? Thanks. --MartyD 03:02, 13 January 2017 (UTC)

Cover Art Modified Bug

There is a minor bug in the publication edit screen for the cover art section. If only the date is changed, it still shows the title and artist as having been changed. See this submission for an example. Only the date was changed, but it is showing all three cover art fields as changed whereas in the regular title section, it correctly only shows the date as changed. It's rather minor, but it would be nice when approving submissions to see only the changed field highlighted. Thanks. -- JLaTondre (talk) 16:35, 21 January 2017 (UTC)

Thanks, I'll take a look. Ahasuerus 17:35, 21 January 2017 (UTC)
Confirmed. Bug 642 has been created. Thanks again. Ahasuerus 22:30, 21 January 2017 (UTC)
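(What the fix needs to do, in sketch form - compare the cover art fields one by one and flag only the ones that differ; the names below are made up for illustration, not the actual submission-display code:)

  # Sketch: list only the cover art fields whose values actually changed.
  def changed_cover_fields(old, new, fields=('title', 'artist', 'date')):
      return [f for f in fields if old.get(f) != new.get(f)]

  # Only the date differs, so only 'date' should be highlighted:
  #   changed_cover_fields({'title': 'X', 'artist': 'Y', 'date': '2016-00-00'},
  #                        {'title': 'X', 'artist': 'Y', 'date': '2017-01-00'})
  #   -> ['date']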

Minor formatting: as by brackets

Hi, I just noticed that there's an extra space at the end of names in the "as by" brackets, like this: [as by Allen M. Steele ]. Now that I noticed it I can't stop seeing it! Can it be fixed? --Vasha 18:26, 28 January 2017 (UTC)

There are a few places where we display an extra space before commas and brackets, e.g. see this FR. At one point I was going to nix them. However, some editors pointed out that they relied on the extra spaces to facilitate copying and pasting data. We may want to get a Community Portal consensus before we change the software. Ahasuerus 18:39, 28 January 2017 (UTC)
Fixed! Ahasuerus 00:26, 8 February 2017 (UTC)
Good... but before commas separating author names it's still there. The editors of Pseudopod look like "Shawn Garrett , Alex Hofelich" --Vasha 00:30, 8 February 2017 (UTC)
Right, there is a separate FR to "Remove extra space between author names" created in mid-2013. Since no one objected to the proposed change, I will look into it next now that I have finished the monthly Fixer run. Ahasuerus 01:00, 8 February 2017 (UTC)
Fixed. Ahasuerus 21:05, 9 February 2017 (UTC)
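(The change boils down to joining the names without the padding space - a trivial sketch, not the actual ISFDB code:)

  # Sketch: build the author credit with ", " instead of " , ".
  authors = ['Shawn Garrett', 'Alex Hofelich']
  old_style = ' , '.join(authors)  # 'Shawn Garrett , Alex Hofelich'
  new_style = ', '.join(authors)   # 'Shawn Garrett, Alex Hofelich'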

Series:Childe Cycle

Series:Childe Cycle - so should that be fixed now, 10 years later? Or should I start a discussion first? Or give it more time - it has not been 10 years yet... 6 more weeks are needed :) Annie 20:16, 2 February 2017 (UTC)

Fixed -- thanks for finding it. The fact that Wiki pages are often forgotten or ignored is one of the reasons why I have been trying to get the Wiki migration project wrapped up sooner rather than later :-) Ahasuerus 21:35, 2 February 2017 (UTC)
Wiki cleanup - I have been finding quite a lot of amusing things... However, I can either work on languages, or on Fixer lists, or on the wiki cleanup - I do alternate them, but I kinda cannot do them at the same time - especially with all the built-in time waiting for moderators to approve submissions. So that shall be finished - in time. Deletion Candidates could use some attention though, if you have the time. :) Annie 22:43, 2 February 2017 (UTC)
Sure, understood. And honestly, as I watch various "bad" numbers dropping across the board week in and week out, I can't help thinking that we are doing exceedingly well now that you are on board. At the rate we are going, there will be no language-less titles left in a matter of weeks, a huge accomplishment. And we have just a few hundred Wiki-based series and publisher pages left, which is eminently doable. Ahasuerus 01:01, 3 February 2017 (UTC)
Well, I am happy to help. About the wiki ones - about 100 fewer after today, once someone goes and deletes them. The Publication ones are the ones that will take the longest from the wiki, as there will be a lot of notifications to be posted all over the place. Together with the remaining fanzines, magazines and a few series that require adding more entries to the DB. A huge number of the remaining publisher pages will remain as they are (too big or complicated to convert) and will get linked from their Publisher records - I have been basically skipping those for the most part, and one of these days I will run through them and do the updates. :) If someone downstream wants to clean those, they are welcome to - I am not moving pictures and heavily formatted long text over. Annie 01:37, 3 February 2017 (UTC)
Right. The original plan was to move everything to the database and that's what the first iteration of cleanup reports assumed. However, once the reports were generated, we quickly realized that some Wiki pages were much too complex to move to the database. In the end we decided to treat them like we treat third party sites and simply link to them.
Thanks for working on series. I will be reviewing the list of deletion candidates later today. Ahasuerus 18:04, 3 February 2017 (UTC)
Yeah, I know. I am bouncing between series and publishers - I do get bored now and then and need a change in what I am looking at :) Thanks for deleting those -- that will help me see what is still standing (I know that if I go in some semblance of alphabetical order, it will be easier to track, but then I get bored :) ) Annie 18:16, 3 February 2017 (UTC)

Story lengths 2016

Now that people are thinking about awards nominations and consulting the database for eligibility questions... I am trying to tidy up as much 2016 short fiction as possible. Could you pull me up a report where title type is SHORTFICTION, language is English, year is 2016, and the length is not specified? I won't be able to find all the lengths but quite a few. --Vasha 03:00, 3 February 2017 (UTC)

Sure, I will give it a shot tomorrow morning. Ahasuerus 05:27, 3 February 2017 (UTC)
We have 2,153 matching titles as of this morning. Would you like me to post them on your Talk page, one letter (A, B, C, etc) at a time? Ahasuerus 17:55, 3 February 2017 (UTC)
Actually, please sort them by author, that will make it easier to find the information for several at a time. Yes, posting on my talk page is fine. --Vasha 19:22, 3 February 2017 (UTC)
Sure, I can do that, although it won't be perfect since some stories have multiple authors. I expect to post a list of "A" stories later today. Ahasuerus 19:40, 3 February 2017 (UTC)
I have set up a special page to post them on. --Vasha 20:26, 3 February 2017 (UTC)
Done. Please let me know when you need the next letter. Ahasuerus 23:11, 3 February 2017 (UTC)
Ready for another letter now. --Vasha 03:48, 7 February 2017 (UTC)
Nice! "B" has been added. The page is getting longer, so I would recommend splitting it into letter-specific sub-pages. Ahasuerus

(unindent) C page is ready. I suggest regenerating the list because it will already have shrunk by more than half (I've gotten all the low-hanging fruit). --Vasha 16:49, 9 February 2017 (UTC)

These special lists/reports are generated anew when you ask for a new letter/whatever :) Annie 16:54, 9 February 2017 (UTC)
Done. And yes, ad hoc reports are generated on demand. They are run on the development server, so I need to refresh it with the latest production backup before each run. Ahasuerus 17:59, 9 February 2017 (UTC)
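(Roughly what the ad hoc query behind these letter lists looks like - column names and the language id are from memory of the public ISFDB schema and should be treated as assumptions, not as the exact script that was run:)

  # Sketch of the ad hoc report: 2016 English SHORTFICTION titles with no
  # length value.  Column names and the language id are assumptions.
  query = """
      SELECT t.title_id, t.title_title
      FROM titles AS t
      WHERE t.title_ttype = 'SHORTFICTION'
        AND t.title_language = 17          -- English, in this sketch
        AND YEAR(t.title_copyright) = 2016
        AND (t.title_storylen IS NULL OR t.title_storylen = '')
      ORDER BY t.title_title
  """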
D page ready --Vasha 15:51, 11 February 2017 (UTC)
All yours! Ahasuerus 15:59, 11 February 2017 (UTC)
Please send me E, F, and G --Vasha 16:09, 13 February 2017 (UTC)
Done. Ahasuerus 17:23, 13 February 2017 (UTC)

(unindent) I took a break from chasing lengths to add all the anthologies and collections that were on the Stoker preliminary ballot, but now I'm done with that and will go back to lengths tomorrow -- so send me H through K please. --Vasha 23:12, 16 February 2017 (UTC)

Done. Oh, and in case you are going to tackle Aer-ki Jyr's "Star Force: Origin Series (Aer-ki Jyr)" series: a while back I added the first 26 novellas while processing robotic submissions. Then I ran out of steam, which is why the remaining collections are "empty". Ahasuerus 00:05, 17 February 2017 (UTC)
Eh, no, I'm not going to look at every one of those to check if it's novella-length or slightly under. There are limits to my dedication. --Vasha 00:43, 17 February 2017 (UTC)
I am reasonably sure that all of them are. Each trade paperback book contains 4 novellas and is 310-400+ pages long. Each page contains up to 400 words, so each work is in the 25,000-40,000 word range. Ahasuerus 00:56, 17 February 2017 (UTC)

(unindent)Please create the L-O list --Vasha 05:47, 24 February 2017 (UTC)

Done! Ahasuerus 15:29, 24 February 2017 (UTC)
Next-to-last section coming up: P-S. Thanks. --Vasha 19:14, 25 February 2017 (UTC)
No "Q"s, I am afraid, but lots and lots of "S"s! Ahasuerus 19:22, 25 February 2017 (UTC)
Send me T-Z please...
Done -- thanks for working on them! Ahasuerus 01:15, 1 March 2017 (UTC)

So that's taken care of. You can delete the pages I created: [3], [4], [5], [6], [7], [8], [9], [10]

There are only 500-odd stories whose length I couldn't find; that's 6-7% of the total, so I'm proud of that. And with that, Project Short Fiction 2016, which I started in early December, reaches an end. The objective was to add as much of the year's short fiction to the database as possible. To be sure, I could go on adding the contents of obscure anthologies and small magazines until infinity, but I think I've gotten to the point of being able to say "enough".

For 2017, I have two projects in mind: verify all the anthologies in my local libraries, and keep up with new short fiction month-by-month so there's not such a crunch at the end. --Vasha 02:28, 2 March 2017 (UTC)

Are you interested in setting up a page somewhere in the wiki for all the magazines/ezines and so on that are expected to have issues so they can be checked issue by issue and someone can help if they have a minute? It can also include any collection and anthology that is known I guess. Annie 02:33, 2 March 2017 (UTC)
That is a most excellent idea. I actually started gathering information on what magazines exist and how often they update, and putting that on a public page is the right step. Would you be able to design it? It would have a list of magazines (with a handy link to each one's website) and a layout of their expected issues (with a space to add links, such as to the Amazon page for that issue when it exists, to that issue's web page, etc.), and some sort of completion code system, say numbers one through four. There are four stages as I see it: 1. Creation of a record for the issue. 2. Adding contents without story lengths. 3. With lengths. 4. Verified. --Vasha 02:49, 2 March 2017 (UTC)
I'll take a crack at putting something together and we will edit as needed if the format is not good enough. Probably over the weekend. I'll also see if I can find my old list of non-genre magazines that publish SF now and then - these are sometimes really hard to find and I had a list once... somewhere :) Annie 02:54, 2 March 2017 (UTC)

Dates

Why do we date novel title records the date of the first complete-book publication, rather than the date it first appeared serialized in a magazine (in the case of an earlier serialization)? --MartyD 21:41, 4 February 2017 (UTC)

This Help page provides an explanation. My current take on it is that it would be better to address the issue in the software. Unfortunately, I haven't been able to come up with a design that would make everyone happy. Ahasuerus 23:04, 4 February 2017 (UTC)
There is also the issue of serialization online. The Hugos do consider web publication to be publication. --Vasha 00:05, 5 February 2017 (UTC)

Fantasy Book #4

Hi. On your PV1 Fantasy Book No.4 it shows two different covers, a b&w and a magenta. The indicia shows a regular and a deluxe edition available (with differing subscription rates for each). I have the magenta, and it differs from the b&w in both content and the notes: there's no interior art in the magenta, and the whole number _is_ listed on the cover. The page count is 68 (covers are included in the count). No price shown in the mag. Should there be additional notes to cover this, or should there be a separate edition of the mag? I've just noticed that Vol 1, No.2 has a deluxe edition separate from the regular. If you thought this should be done with No.4, would the b&w edition be the regular since it includes artwork, or do you have some other idea of which would be the deluxe? I'll refer RTrace over here for his input since he's also a PV. Thanks, Doug / Vornoff 19:22, 5 February 2017 (UTC)

There's some information in Tymn and Ashley that may be helpful. Their initial comment on format is "Bibliographically confusing". They go on to say there were actually three editions of each issue: subscriber's, collector's and newsstand. The edition can be identified by the quality of the paper and slight variations in size, though they don't detail these. They mention that the price for the collector's edition was 35¢ with the newsstand priced at 25¢. My copy has the same illustration that we currently have in black and white. However, it is actually in brown and green printed on yellow paper. Given that it has the 25¢ price, I assume that it's the newsstand edition. Curiously, the cover is actually a dust jacket over blank wraps. My understanding is that dust jackets on paperbacks were not unusual for FPCI. I'd be for splitting the editions out, and I can certainly provide a color scan of my copy. I'll wait until Ahasuerus mentions which edition he has before deciding how (or if) we agree to split these out. --Ron ~ RtraceTalk 21:17, 5 February 2017 (UTC)
Thanks for looking into this! Unfortunately, much of my collection is currently boxed up and I can't check this particular issue.
As far as splitting the publication record into 3 goes, it seems reasonable, especially since we already have Vol. 1, No. 2 entered that way. Keep in mind that it will require a few extra steps because you can't clone magazines at this time. One NewPub submission, one Title Merge submission and one Import Contents submission should do the trick. Ahasuerus 21:26, 5 February 2017 (UTC)
I would assume there would be no attribution to the newsstand edition, which would be consistent with how the issues now appear. That leaves the attribution for the "subscriber" and "collector" editions, as you state above, Ron. You have the attribution for Volume 1 No.2 as (deluxe edition), which doesn't match either of these, but it _does_ match how one of the two types of editions is called in the indicia of the copies I have. Also, as to the magenta/pink non-priced non-illustrated edition of #4 that I have, I'm not sure whether that would be the "subscriber" or "collector" edition. If there were only two choices, i.e., "newsstand" and "deluxe", I would say it must be the "deluxe", except why would the deluxe version omit the illustrations? In any case, if you made three editions, wouldn't you have to add TWO NewPub submissions, not one? Or am I not getting something here? Doug / Vornoff 18:33, 6 February 2017 (UTC)
Sorry for the delay in responding. I've been mulling over this the past couple of days. I've also read the full entry in Tymn and Ashley on Fantasy Book. The full entry mentions only regular and deluxe editions. The actual quote in the later format section is "up to three editions". Given that, I would suggest that we only detail regular (my copy) and deluxe edition (yours). Unless we obtain evidence of a further edition for a particular issue, I wouldn't suggest adding one. I will go ahead and upload a new cover illustration for my copy. I won't delete the existing "double" illustration in case you want to use it to extract the magenta cover. You can delete it when you don't need it any longer. I'll let you create the deluxe edition record and import the contents from the regular edition, unless you would like me to do it for you. Regarding your missing interior illustrations, I wonder if that is perhaps a binding error. I would recommend adding a note that the verified copy is missing the illustrations present in the regular edition. If a second verifier shows up for the deluxe edition, we may get a better idea on whether the illustrations should be there. Lastly, this assumes that Ahasuerus' copy is the regular edition. We may have to adjust the verifications once he is able to examine his copy. Thanks. --Ron ~ RtraceTalk 21:54, 8 February 2017 (UTC)
I've submitted the deluxe edition and will import titles when/if accepted. I'll scan my own cover and add it. If you don't mind, can you delete the old cover? I don't know what's involved in deleting a cover properly. I've added my own notes, revised from the notes now existing for the regular edition. Let me know if you think something should be changed there. Thanks, Doug / Vornoff 03:48, 10 February 2017 (UTC)
Happy to do so. There is a link to delete files on the image page (from the link in "Cover art supplied by ISFDB") under the file history. I assume that it is available to all since it's in the wiki, but it may only be there for moderators. Yours looks good. --Ron ~ RtraceTalk 12:52, 10 February 2017 (UTC)
Moderators only (same as with wiki page deletion) unless I am really blind today :) Annie 02:06, 11 February 2017 (UTC)

Bug: setting story lengths from Edit Publication

Today I was able to confirm something I'd been suspecting, when I found story lengths unset in publications I was sure I'd checked previously. When you go into Edit Publication mode and change the story lengths of the white (editable) interior titles, the lengths of the gray titles get set to unspecified! This doesn't affect anything except the length of the gray titles. I don't know what would happen if you change something else about editable titles but not their length.

So I hope you can fix this -- for now, of course, I am no longer editing interior contents from publication view.

Of course, over the past few days, before I realized this, I erased a lot of previously set lengths. I don't know how to fix it all. Arrrgh. --Vasha 20:30, 12 February 2017 (UTC)

Ouch! Sorry about introducing the bug when I implemented the last round of changes. I will try to recreate it on the development server and then fix it ASAP.
As far as restoring the accidentally erased values goes, I will compile a list of EditPub submissions which deleted the "length" value of a short fiction work. I will then restore an earlier backup and check what those values were. It may take some time, but it should be doable. Ahasuerus 22:44, 12 February 2017 (UTC)
I think I have been able to recreate the problem on the development server using your submission as an example. It's not the act of assigning a length value to an editable Content title that corrupts the uneditable data, it's the act of entering a page number for an uneditable Content title. I'll see if I can come up with a fix real quick. Ahasuerus 23:01, 12 February 2017 (UTC)
OK, I think I got it. Now to find the affected submissions and titles... Ahasuerus 01:17, 13 February 2017 (UTC)
That's great. I have to apologize, too; there's more corrupted data than there would have been if I had made an effort to figure out what the problem was when I first started seeing things that made me say "???" last week. It took me a while to go from "I thought I set that value already, but maybe not; do it again and move on" to "Something's definitely wonky, and it's editing publications that causes it somehow." --Vasha 05:00, 13 February 2017 (UTC)
Unfortunately, wonky software behavior can be hard to notice/diagnose :-( Ahasuerus 17:32, 13 February 2017 (UTC)

(unindent) OK, we have 877 problematic submissions approved between 2017-01-24 and 2017-02-13. Some submissions affect more than one Content title, but there is also some overlap, so the final number is unknown at this time. It will take at least a few hours to write the requisite code and juggle various backup versions, but it should be possible to extract all the data that we need to repair the damage. Ahasuerus 20:24, 13 February 2017 (UTC)

The 877 "bad" submissions affected 1,610 existing title records. Out of that number, 820 were SHORTFICTION. Only 424 of them did not have a length value as of this morning, which is relatively good news.
Next I will restore the 2017-01-23 backup on the development server and check how many of the 424 titles had a "length" value on that day. Ahasuerus 22:31, 13 February 2017 (UTC)
OK, I have created ISFDB:Deleted length values cleanup. It lists all of the known titles whose length was deleted between 2017-01-23 and 2017-02-12 along with the original length. There are only 308 of them, so it should be manageable. Just to be on the safe side, I decided not to assign values programmatically to allow human review. I plan to start working on assigning values later tonight. Ahasuerus 23:54, 13 February 2017 (UTC)
On that list I see "Pale Kings and Princes" and "The Whitechapel Fiend". I deleted the length of those two on purpose because I'm not actually sure how long they are. I didn't recognize any of the other titles.
Seeing so many items in German reminds me of something I'd been meaning to ask about -- the ss/novelette/novella lengths really only apply to stories in English, don't they? After all, the "same amount of story" in French would have 20% more words, and so on. You folks surely must have discussed that before. --Vasha 00:44, 14 February 2017 (UTC)
See this for example. It has a link to an older discussion. Annie 01:23, 14 February 2017 (UTC)
I have corrected Mad Scientist Journal, Winter 2017; the remaining unset story lengths in that issue are intentional. --Vasha 01:37, 14 February 2017 (UTC)
Thanks! Ahasuerus 02:16, 14 February 2017 (UTC)

(unindent) All done. Sorry about the hassle! Ahasuerus 19:57, 14 February 2017 (UTC)

Thanks for the fast repair! Stonecreek 20:03, 14 February 2017 (UTC)
It's the least I could do after making a mess of things :-) Ahasuerus 21:02, 14 February 2017 (UTC)

Report request

While working on the stories, I am finding a lot that have no publications and are not parents. Some of them are leftover variants that were obviously forgotten (I am deleting those), but some are valid stories from, for example, magazines we do not have. Can you get me a report of all stories that are not parents and that have no publications attached to them? I want to get them deleted where they are leftovers, and to add notes on their publications to the ones that are legitimate stories that complete a bibliography - Adam's Rib, for example - that one has its note on publications, but some do not (such as Additional Feghoot Puns).

Not urgent - I have at least a week more of language work (if not more for the interior art) but I'd like to do that at some point. Annie 21:31, 15 February 2017 (UTC)

There is a Titles without Pubs report which currently excludes ESSAYs, POEMs and SHORTFICTION. How about we include these three title types but make an exception for titles with a Note on file? Ahasuerus 21:53, 15 February 2017 (UTC)
Some of the notes need adjusting (I saw one saying that a story was published in AHMM; I want to find out which issue and update it). So yes, expanding this report will be fine (although I've had to redo a few art variants in the last few weeks because a moderator deleted them overnight before I could connect the dots based on the report). So while I am sorting out languages, I'd prefer the report not to include these types - it saves me from redoing work - I know that I can add first, then remove and variant, but I prefer to create the variant and then import/export (this way the publication does not have the two language versions at the same time) :) After that, it would work as well I guess (then maybe you can dump a list somewhere of the ones WITH a note at the time you implement it, so I can go through them). Or I can just ignore these (they have a note after all) and fix them when I see them.
So if we go that way, let me finish the languages first, and then you can add the types (I think they need to be added in the long run anyway) and we will take it from there. Annie 22:01, 15 February 2017 (UTC)
Sounds like a plan -- thanks for thinking about these issues! Ahasuerus 22:07, 15 February 2017 (UTC)
Anything that will keep me from adding my own books... :) More seriously though - I just hate ratty data, and if we are going to be the bibliography for the genre, we may as well do it properly. By the way, did you see my note over in the Moderator's forum on the Chinese transcription records? Annie 22:12, 15 February 2017 (UTC)
Sorry, I forgot to respond yesterday night. I added my comments a few minutes ago. Ahasuerus 23:10, 15 February 2017 (UTC)
No worries and more responses in. Coming from a minority language and a minority writing system means I've been dealing with transliterations for most of my life (not to mention online...). Always fun. Annie 23:21, 15 February 2017 (UTC)
A side effect of the 3 types not being on the report is the number of stories I am now finding that actually need deletion - messed up variants, things like this one. Once languages are done, we definitely need them in the report... :) Annie 23:06, 16 February 2017 (UTC)
I think what happened here is that at one point we had a volunteer who was working on this cleanup report one title type at a time, just like you are now. He handled all the book length types, but he had reservations about shorter works. As I recall, his concern was that adding them to the report could prompt editors to delete some pub-less titles which were legitimately a part of the database. That's why the current plan is to limit this particular cleanup report to shorter titles that have neither pubs nor Notes associated with them. Ahasuerus 00:11, 17 February 2017 (UTC)
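(A sketch of what that plan translates to as a query - table and column names are guesses rather than the actual cleanup-report SQL:)

  # Sketch: pub-less ESSAY/POEM/SHORTFICTION titles with no Note that are
  # neither variants nor parents of variants.  Names are assumptions.
  query = """
      SELECT t.title_id, t.title_title, t.title_ttype
      FROM titles AS t
        LEFT JOIN pub_content AS pc ON pc.title_id = t.title_id
      WHERE t.title_ttype IN ('ESSAY', 'POEM', 'SHORTFICTION')
        AND pc.title_id IS NULL
        AND t.title_parent = 0
        AND t.note_id IS NULL
        AND NOT EXISTS (SELECT 1 FROM titles AS v
                        WHERE v.title_parent = t.title_id)
  """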
Note that it seems I'm nearly the only person who works on this cleanup report now (and a bunch of others). I've already asked to have at least SHORTFICTION included so I could work on the list, but this was denied to me. It's quite unsettling to see that such a request is now granted "on the side" (I don't know if that's an English expression). Hauck 07:20, 17 February 2017 (UTC)
Guessed as much - it made no sense to have the arts in but not the stories. Can we pull the two arts out of the report temporarily (so I can work on them in peace)? Then we will start adding them back one by one (after I convince you to first get me a list so I can add notes where needed and catch the legitimate cases, but that is a different conversation - some of the ones with comments need deletion, so a note alone is not a good enough indicator). I am thinking about what would be, though... Annie 00:18, 17 February 2017 (UTC)
As a point of reference, here is how many "short" pub-less titles (regardless of the presence of Notes) we have as of this morning:
  • POEM: 164
  • ESSAY: 706
  • SHORTFICTION: 1319
If you'd like me to, I could create a Wiki page for each title type similar to the Wiki pages that I create for Vasha. It would let you review all of them before we narrow the list down to pub-less titles without Notes.
Yeah, that would be next on the list after I finish the languages. And the number of stories will be lower by the time I am done with the language assignment for them - I am deleting everything that needs deleting that I can easily spot :) Annie 01:21, 17 February 2017 (UTC)
Re: your request to "pull the two arts out from the report temporarily (so I can work on them in peace)", I am not sure I understand the intent. There are no pub-less INTERIORART or COVERART titles at this time, are there? Ahasuerus 01:05, 17 February 2017 (UTC)
No, there aren't. But while I am varianting and whatnot (pulling different language versions out separately), I create some occasionally - if I have a lot of notes to add to the new language variant, I add the variant first so I can put the notes in, and then import it and remove the existing one inside. But not always on the same day, as I need to wait for an approval. So sometimes I need to create them again because they get deleted overnight when they show up on the report :) I am trying to change my process for these, but it is just easier for me in some cases, and then sometimes I completely forget that they may get deleted... So if we can suppress them temporarily while I tackle the two arts, that would help. Once I am done with the languages, they can come back into the report. If that would be too much of a problem, that's fine. I thought I would ask. Annie 01:21, 17 February 2017 (UTC)
Oh, I see. Sure, I can do that. Normally I try not to make temporary changes which will need to be undone at a later point -- in case I forget or suddenly become unavailable -- but I think this case merits an exception. Ahasuerus 02:09, 17 February 2017 (UTC)
Done. Ahasuerus 03:51, 17 February 2017 (UTC)
Thanks and I do understand your usual position on such requests so I appreciate the change. Annie 06:14, 17 February 2017 (UTC)
I'm quite saddened to see that such alterations to our cleanup reports and their logic are not brought to the attention of moderators, that their utility is only discussed here, and that such changes are decided unilaterally. Note that the bad moderator that deletes pub-less titles is me, and that this bad moderator has proposed a solution to Annie here that tried to protect her work AND the integrity of the db. If you want me to stop moderating and processing cleanup reports, just say so. Hauck 07:24, 17 February 2017 (UTC)
Herve, I never used the word "bad", nor implied that a moderator did anything wrong (or did not mean to anyway - if anything above sounds like that, it was not intentional and should be blamed on my English). Orphaned records like that need deletion. You did what should be done with this kind of orphaned entry; it just caught some of mine in the crossfire (I had been sure I was losing some edits for a while; the above was the first time I tracked down where). I was just looking for a way to get the language cleanup finished without either me needing to change my process (sometimes I believe there is enough time for my updates to be approved and the next ones submitted before the reports run, but things do not work out) or you needing to change your processing of the cleanup reports - and to save both of us some work: you from deleting, me from recreating. So I asked for them to be suppressed for a few weeks. If you think that this will cause an issue with the integrity of the site in those few weeks, then the change can be reversed. I am sorry if you felt that this request was in any way against you - as I said, I was just trying to find a way to lower the load for everyone... Annie 08:53, 17 February 2017 (UTC)
As for the cleanup reports - you are too fast - by the time I wake up, you are mostly done with them - I am trying to clean the ones I know enough of in the mornings but some are always done before that :) Annie 08:53, 17 February 2017 (UTC)
My take on it is that it's possible -- even likely -- that some truly orphan INTERIORART and/or COVERART records will be "invisible" for a few weeks. However, they will reappear on the cleanup report once the change has been reversed, at which point they will be available for cleanup. Moreover, the plan is to add the three missing "short" title types to this report once the language cleanup project has been completed. At the end of the day the report will be more comprehensive than it is now. A win-win situation! :-)
As far as notifying other moderators goes, I added a note to the report header, but I should have posted on the Moderator Noticeboard as well. Ahasuerus 15:23, 17 February 2017 (UTC)
Posted. Ahasuerus 15:35, 17 February 2017 (UTC)

Order for Used These Alternate Names

I've noticed that the list of Alternate Names on Author pages doesn't appear to be in any order, or maybe is just a first in-last out queue... Would it be difficult to sort them? Albinoflea 23:04, 17 February 2017 (UTC)

"Used These Alternate Names" and "Used As Alternate Name By" are currently ordered by each author's/pseudonym's last name. Would you prefer a different sorting algorithm? Ahasuerus 23:34, 17 February 2017 (UTC)
Oh, okay, I see it now, and the transliterated names are used for the sort when non-Latin variants are thrown into the mix. It just seemed odd that, for instance, the two Cyrillic names for Asimov are separated by a Latin-script variant. Albinoflea 02:44, 18 February 2017 (UTC)
Well, the sorting is based on "Last Name" values. As per Help:Screen:AuthorData:
  • At the moment, the ISFDB Author Directory is based on the Latin alphabet
so it always uses the Latin version of the last name. It may be possible to make non-Latin canonical names appear last, but I'll have to think about it. Ahasuerus 04:10, 18 February 2017 (UTC)
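(If that is ever implemented, a possible sort key - illustrative only, not the actual page code:)

  # Sketch: keep the Latin last-name ordering but push entries whose
  # canonical name is non-Latin (e.g. Cyrillic) to the end of the list.
  def alternate_name_key(latin_last_name, canonical_name):
      non_latin = any(ord(ch) > 127 for ch in canonical_name)
      return (non_latin, latin_last_name.lower())

  # usage: sorted(names, key=lambda n: alternate_name_key(n['last'], n['canonical']))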

Fixer 2017

When you have a chance, can you get me the lists for Fairwood Press, Night Shade Books, Orbit, Wildstorm, DC Comics and Marvel that had accumulated since last time? And if you want to get me another publisher as well (anything that is not one of the Romance/Fantasy thingies will be fine). I am slowly plowing through the deferred list of Fixer - should be done in a few weeks - some of them lead me to other things that need fixing and it takes a bit.

On a separate note - I moved the old rejected ones to a new list so the current rejected list has only new titles. Thanks! Annie 01:32, 20 February 2017 (UTC)

Sure, I'll see what I can do tomorrow. Ahasuerus 03:22, 20 February 2017 (UTC)
Night Shade Books done. There were a few delayed ISBNs which I should have caught, but they managed to slip through the cracks. Ahasuerus 17:36, 20 February 2017 (UTC)
The other requested publishers are done as well. Unfortunately, it's much harder (and more error-prone) to generate lists of recent ISBNs due to technical issues. I'll see what I can do about adding another publisher to the mix. Ahasuerus 18:44, 20 February 2017 (UTC)
If older lists are easier, I am fine with that. They need clearing anyway. Annie 20:47, 20 February 2017 (UTC)

(unindent) I have ~20 books left in the January list which should be in the DB by some time tomorrow - of course, if I stop getting distracted by incomplete series and authors, it may go a bit faster :). Does our favorite robot have any February leftovers he could use some help with? If not, can he find something else to send over? It is not urgent - I still have quite a lot in the deferred list - mainly very old titles that are already OOP, so it takes a while to figure out whether they were published at all. But they are going down steadily. Annie 01:10, 10 March 2017 (UTC)

Great! I have updated this section with the latest data. As you can see, there are no recent "n-p", i.e. "new (unprioritized) paper" books, left. The 80 "1-p" books are the ones that you have been working on and will be removed from Fixer's queue when the development server is refreshed with the live data.
At this point, we are pretty much caught up on the paper front -- thanks again! -- but we have fallen behind on the e-book front since 2015-03. Let me see if I can quickly sort out a bunch of 2017-02 e-books and update your list... Ahasuerus 01:30, 10 March 2017 (UTC)
Yay for the paper books - happy to help, as little as it was. :) Sure, e-books are fine - let's see if we can get caught up a bit. Annie 01:45, 10 March 2017 (UTC)
Excellent -- 130 2017-02 e-books have been added to the list! Ahasuerus 02:09, 10 March 2017 (UTC)
Thanks! I will see what I can do with those. Quick look - I do not see as many anthologies and collections (the few I am seeing I have already added earlier), so it should be faster than the previous batch. Hopefully. Annie 16:36, 10 March 2017 (UTC)
PS: Out of curiosity, why do we have so many n-p in 2015? Am I misreading something? Annie 16:39, 10 March 2017 (UTC)
And one more question. I went to add one of the ebooks and realized we are also missing the hardcover: 9780756409074. I just added that one (and will keep adding the paper version of any of the ebooks above) - but is there any way to find out why Fixer found the ebook but not the hardcover? Annie 16:59, 10 March 2017 (UTC)
It has to do with the way Fixer works, the way Amazon handles its data and the way I process Fixer's catch. On the Amazon side, there are three major factors:
  • Their data entry people assign books to "browse nodes" (basically genres) somewhat unpredictably. For example, a juvenile time travel book may be found under "adventure" and "children's books", but not under "time travel". A couple of years later another data entry person may take a look and add "time travel", at which point the ISBN will become accessible by Fixer. Of course, it will only happen if he checks the right data, which may (or may not) happen as a side effect of another operation. It's not like I am constantly asking Fixer to re-acquire the time travel books published in 2012.
  • Different books are added to their main database at different times. Some appear many months in advance while others pop out of nowhere. There is a method to this madness, e.g. they apparently have special rules for adding Australian books, but I can never be 100% sure what may be added when. As a rule of thumb, the bigger the publisher, the earlier the data becomes available.
  • Not all editions are linked. Amazon has become much better at it over the last few years, but they still have many ISBNs that are not linked to other, related ISBNs. As Amazon's data entry people review and link their older ISBNs, it enables Fixer to identify more "stuff".
What this means is that I never know what Fixer may find. If I were to ask him to get me the "paranormal romance" books published in 2017-02, his catch might be something like "93% paranormal romance published in 2017-02" and "7% all kinds of other vaguely related stuff". For this reason Fixer's "n-p" and "n-e" queues are constantly growing, including the buckets that were cleared years ago. I try to keep up, but it's an uphill battle.
Oh well, probably more information about Fixer's ongoing struggles than you wanted :-) Ahasuerus 17:10, 10 March 2017 (UTC)
No, that was useful :) It helps explain some discrepancies I had been seeing. It's just that this is a DAW book (and DAW is a big dinosaur in the space), 3rd in a series we already have 2 of. You would think that it would pop up in the lists - but apparently not. Oh well - it is what it is. On a very separate note - is there any hidden way to ask Amazon what the ISBN of an e-book is when it is not printed inside the book (when it is, that is easy)? :) Annie 17:26, 10 March 2017 (UTC)
Unfortunately, I am unaware of a way to go from an ASIN to an ISBN manually. The Amazon API that Fixer uses is free, but has certain limitations and... quirks, for lack of a better word. Ahasuerus 22:00, 10 March 2017 (UTC)
If there is any reference describing that API that I can look at, I can see if I can cook up something on my end to help. Alternatively, how hard would it be to have a page where I can list ASINs and Fixer can pick them up and give back ISBNs? That should help with the e-book catchup by allowing an ebook to be added right after its print version without waiting for Fixer to find it. Just thinking aloud here. Annie 22:16, 10 March 2017 (UTC)
Amazon has what they call "Product Advertising API". It accepts all kinds of requests and sends back the requested data. The API usually gets it right -- I'd say at least 95% of the time -- but there are quirks. The online documentation is serviceable. Their authentication mechanism is a bit painful, but once you set everything up, it's solid. The next hurdle is the fact that the returned XML is fairly complex since it needs to be able to describe tens of thousands of different products. Of course, if all you want is the book's ISBN, it shouldn't be too hard to find.
Re: "a page where I can list ASINs and Fixer can get them and give back ISBNs", sure, we can give it a try and see how it goes. Ahasuerus 00:36, 11 March 2017 (UTC)
Okay - I will start collecting the ASINs of the e-books I cannot find ISBNs for and will start a page somewhere under my talk page. And I will do some reading on the Amazon API - all I need is the ISBN - if I landed on the page, it is because I was either adding the paper book or working on an author and found it that way. All I am missing is an ISBN (and not all Kindle books print them at the start of the book - so Look Inside and "get sample" do not help - it is at the end of the book in a lot of them). Annie 00:46, 11 March 2017 (UTC)
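(For whoever ends up writing that, a minimal sketch of pulling an ISBN out of a Product Advertising API ItemLookup response - request building and signing are omitted, and the namespace/element names follow the 2017-era documentation, so treat them as assumptions:)

  # Sketch: extract an ISBN (falling back to the EAN/ISBN-13) from an
  # ItemLookup response.  Authentication/signing is not shown here.
  import xml.etree.ElementTree as ET

  NS = {'aws': 'http://webservices.amazon.com/AWSECommerceService/2011-08-01'}

  def isbn_from_itemlookup(response_xml):
      root = ET.fromstring(response_xml)
      attrs = root.find('.//aws:ItemAttributes', NS)
      if attrs is None:
          return None
      for tag in ('ISBN', 'EAN'):
          elem = attrs.find('aws:' + tag, NS)
          if elem is not None and elem.text:
              return elem.text
      return None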
Talking about lists - you may want to grab the Rejected list and give it to Fixer to mark the books - it has been getting quite big (unless you are doing it now and then behind the scenes - in which case, ignore this :) ). Annie 00:46, 11 March 2017 (UTC)
Updated, thanks. Ahasuerus 01:17, 11 March 2017 (UTC)

(unindent) And that batch is mostly done - 6 remaining after all the pending ones are approved - a few waiting for an approval from my queue so I can clone, a couple needing some more research, and for 1 I will have a copy next week, so I will enter it then (I think it is a new magazine, so I want to confirm so I do not need to convert it). Annie 04:07, 18 March 2017 (UTC)

Excellent! Ahasuerus 15:40, 18 March 2017 (UTC)

(the rest of the sub-section has been moved to a new section)

A few notes:

  • Fixer needs to get some more Rejected from here and kill them - a few comics, one that is not spec fic
    Done. Ahasuerus 15:40, 18 March 2017 (UTC)
  • A few got added to the Queue 2 list as well and need to be taken from Fixer (let me know if there is a point in continuing to separate the two groups or if I should just reject these as well - these are early readers, picture books and the like) Annie 04:07, 18 March 2017 (UTC)
    Thanks, I have updated Fixer's queue 2. I think it's nice to keep track of early readers etc in case we ever go back and add them to the database. However, it's not essential, so please feel free to zap them if a separate page becomes a chore to maintain. Ahasuerus 16:13, 18 March 2017 (UTC)
    No, it's not a problem. It does not make a difference which table I add it to - I was making sure I am not making double work for you. Annie 20:03, 18 March 2017 (UTC)
  • ASIN -> ISBN - if Fixer can get me the ISBNs for the ASINS here, I can add these as well (missing books from series where I do not know the ISBN and it is not visible in Look Inside and ebooks of paperbacks I added). Annie 04:07, 18 March 2017 (UTC)
    Done. Ahasuerus 15:45, 18 March 2017 (UTC)
  • Can you look at this list - just 1 book there now -- I am not sure how to enter it because something is off between the description, the cover and so on. Any assistance will be appreciated. Annie 04:07, 18 March 2017 (UTC)
    It's the Spanish version of Rick Riordan's The Hidden Oracle. "El Orßculo Oculto" is a corruption of El Oráculo Oculto. Turtleback Books specializes in library bindings. I don't know how good their bindings are, but their bibliographic data on Amazon is often messy. Ahasuerus 16:20, 18 March 2017 (UTC)
    Ah, I could not figure out the corruption. Makes sense now. Annie 20:03, 18 March 2017 (UTC)

When Fixer feels like posting a new list to my page, he can go ahead :) Meanwhile I will keep chipping at the deferred list:) Annie 04:07, 18 March 2017 (UTC)

Certainly! I have added Fixer's "Priority 1" paper ISBNs from 2016 to your page. I am currently trying to clean up 2015 to make it presentable. Ahasuerus 16:42, 18 March 2017 (UTC)
I will get on them as soon as I follow up on my updates from yesterday that were waiting for a moderator. Annie 20:03, 18 March 2017 (UTC)

ISFDB-SFE3 Author Mismatches

Does the "Click Once Resolved" on the ISFDB-SFE3 Author Mismatches clean-up report permanently remove them? Or if we don't have a matching record do they come back when the report is re-run? Their scope is broader than ours and they have entries like this one that aren't within the ISFDB scope. Thanks. -- JLaTondre (talk) 14:45, 20 February 2017 (UTC)

Yes, the removal is permanent.
At this time the only way to regenerate this report is to run a script on the development server and then move the results to the live server. The process is ugly, the script is ugly and, well, the whole thing needs to be rewritten. It should be possible to create a moderator-only button to regenerate the report on the fly. It's on the list of things to do, but it wasn't high enough to add it to the "Roadmap 2017" list which I posted yesterday. Ahasuerus 17:06, 20 February 2017 (UTC)

Wiki columns

Hi, do you know whether the ISFDB wiki has a template for column display, such as several at en.wikipedia? My interest is better organization of notes in my user space.

If there is none yet here, I am inclined to copy-paste one such as {col-begin} and its complements Wikipedia: Template:Col-begin. Is there somewhere such actions should be discussed first?

--Pwendt|talk 21:17, 20 February 2017 (UTC)

P.S. In my user space at EN.wikipedia.org, User:AnomieBOT last month replaced the {multicol}-family templates with the {col}-family, as the former has been eliminated everywhere. --Pwendt|talk 21:22, 20 February 2017 (UTC)
I haven't been keeping track of templates, so I am not sure. Here is a list of all of our templates -- perhaps one of them will meet your needs? If not, please go ahead and create a new one. Ahasuerus 21:48, 20 February 2017 (UTC)

Escaping woes...

When you have a chance, can you check this? Let me know if my explanation of what is going on does not make sense. Annie 20:22, 21 February 2017 (UTC)

Possible pseudonym summary bug

Hi. Go to Robert Clarke. It shows just the one title, and no way to display any others. Now go to Charles Platt, where you'll find listed Less Than Human also as by Robert Clarke. And sure enough, that Robert Clarke title + pub are there, but you can't get to them from Clarke's summary. I realize the behavior may change if we variant the lone The Day the Leash Gave Way to something canonical, so I have a submission trying to do that on hold (which may be wrong anyway). What do you think? Let me know if you want me to release the hold. --MartyD 23:26, 21 February 2017 (UTC)

I'll chime in (because I spent some time figuring out some of that mess for a different author). Show All Titles pulls it up here -- which is what we expect to happen with pseudonyms, right? Isn't the idea that a title should only show up on the canonical author and not under a pseudonym? Shouldn't that lone title just be varianted onto the canonical author, leaving "Robert Clarke" empty, and we are all set? Not different from for example. Or do we need to have titles on both pages? Annie 23:32, 21 February 2017 (UTC)
That's right, titles are only supposed to appear on their canonical authors' pages. I'll wait for Marty to respond. Ahasuerus 23:35, 21 February 2017 (UTC)
Well, yes. What I was expecting to see is a "View all titles" link, such as here. But maybe I'm imagining things and it never shows up as long as there are unvarianted titles in the summary. I don't normally go looking for all titles; it's just something I wanted to do while working on a couple of submissions. If you think nothing's wrong, don't mind me. --MartyD 00:49, 22 February 2017 (UTC)
p.s. I never noticed "Show All Titles" in the Editing Tools! :-) A mind is a terrible thing to waste.... --MartyD 00:49, 22 February 2017 (UTC)
Back in the day the Editing Tools version of "Show All Titles" was the only way to get to the "All Titles" page. So many people had trouble finding it that the "view all titles by this pseudonym" link -- which takes you to the same page -- was eventually added. It's only displayed if the author has no canonical titles, but perhaps we should display it even if they are present. Ahasuerus 00:56, 22 February 2017 (UTC)
I never use the one on the page itself - I always look under Editing Tools - which is why I never noticed it was missing here... The fact that it is missing in this case may be why - I usually need it exactly in cases like that (usually for merging) :) Annie 01:07, 22 February 2017 (UTC)

Fantastic Story Magazine, Fall 1954

In your verified Fantastic Story Magazine, Fall 1954, both occurrences of "Younger By the Minute" should have "by" in lower case according to the conventions. I just submitted the change :) Annie 19:59, 24 February 2017 (UTC)

Approved, thanks! Ahasuerus 20:05, 24 February 2017 (UTC)

Hover over help needs fixing

Start creating a new omnibus. Hover over the question mark next to Content. It still talks about /4-6 and so on. If you click on it, it opens the correct page that shows only 4-6. Can we fix the hover help? Thanks! Annie 23:51, 24 February 2017 (UTC)

Fixed -- thanks! Ahasuerus 23:57, 24 February 2017 (UTC)
Thanks for the quick fix! I almost got tripped up until it dawned on me that we changed that a while back. :) Annie 00:07, 25 February 2017 (UTC)
Not to worry, there is a cleanup report for that as well :-) Ahasuerus 00:10, 25 February 2017 (UTC)
I know - I saw an entry there the other day and wondered why someone had used the wrong format when I remember when this page was changed. That explains it, I guess. I am trying not to let my edits end up in cleanup reports... Annie 00:13, 25 February 2017 (UTC)

Publication Date Error

Something is wrong with publication dates. See this entry's Bibliographic Warnings section. Thanks. -- JLaTondre (talk) 21:12, 25 February 2017 (UTC)

I think I got it now. Thanks for reporting the problem! Ahasuerus 21:32, 25 February 2017 (UTC)

Weird Language problem

Can you figure out how this lost its language? Based on the verifications it is not a new title (my best guess from what I saw in the list is that it had new titles added), and I know it was not in the report on Friday, so somehow it got its language stripped. Annie 19:38, 27 February 2017 (UTC)

This is what happened apparently so never mind. Annie 20:03, 27 February 2017 (UTC)
Once all of the old titles have been cleaned up, it will be interesting to see what types of scenarios may cause problems going forward. So far we have seen a few Edit/Clone submissions where:
  • new titles were added manually, and
  • the software couldn't find the reference title
Without a reference title, there was no way to assign a language. As we discussed earlier, it is unlikely to become a major issue, but we'll see if anything else comes up... Ahasuerus 20:18, 27 February 2017 (UTC)
I did not find the note about the disappearing reference title until after I posted here because it did look weird. Once I found it, I knew what happened and was not worried anymore. :) Annie 20:22, 27 February 2017 (UTC)

Author credited only on a variant

Found one more of those: Martynas Ycas. Not sure how to fix it (I remember something like that a few months ago but cannot find it now). Annie 00:56, 28 February 2017 (UTC)

Well, variants are not displayed on Summary pages and this particular author had no canonical titles, so there was nothing to display.
Since we know that Mr. Tompkins Inside Himself (1967) is a significantly expanded version of Mr. Tompkins Learns the Facts of Life (1953, by Gamow alone), I ended up breaking the VT relationship and moving the newly "canonicalized" Mr. Tompkins Inside Himself to the "Mr. Tompkins" series as a "supporting" title. It's how we usually handle significant rewrites/expansions that are no longer "proper" variants anyway. It's not perfect, but it's better than nothing :-) Ahasuerus 01:52, 28 February 2017 (UTC)
Yeah, I figured out why it was not showing - took me a bit to see it but that part I finally figured out. I'll remember this fix (this time). :) Thanks! Annie 02:56, 28 February 2017 (UTC)

A home for suffixes?

Take a look at the Grau submission I have on hold. If we don't accept the "III" as part of the Legalname field (per the help), where would we put it? --MartyD 02:13, 1 March 2017 (UTC)

To the best of my knowledge, there is no place for suffixes in the Legal Name field. Of course, the canonical name can contain anything, including suffixes, honorifics, etc.
As I recall, the original discussion of legal names mentioned the fact that suffixes are not part of the legal name, but my current understanding is that it's not always true. A quick Advanced Search shows that we have a lot of legal names that contain suffixes. Something for the Rules and Standards page, perhaps? Ahasuerus 02:30, 1 March 2017 (UTC)
P.S. BTW, is T. E. Grau's legal first name really "Ted"? Ahasuerus 02:31, 1 March 2017 (UTC)
Dunno. He's the submitter. BTW, this one is even more interesting. :-) I will accept his submission, move the III to the end for now, and raise the issue. And I'll drop him a note about the Ted, too. --MartyD 03:31, 1 March 2017 (UTC)
To close the circle: He responded that his full legal name is Theodore Edward (so Ted2?). And FYI, he describes the III as part of his legal name. --MartyD 11:58, 2 March 2017 (UTC)
I suspect that the last part is going to be a problem if we change our data entry rules. How can we tell whether the suffix is part of the author's legal name? In this case the author is available and willing to provide detailed information, but 9 times out of 10 we use secondary sources, which are not always based on birth/death records. Ahasuerus 17:17, 2 March 2017 (UTC)
Well, I think probably what's best is to change the help to reflect that the suffix should be omitted unless it is part of the author's legal name. We can then have some further clarification. There will be a set of suffixes that are almost certainly NOT part of a legal name and should be omitted unless reliable evidence shows them to be part of the legal name (Sr., professional and educational designations, ranks, etc.). It's really the Jr. and II+ that are going to be the gray area. And we're already capturing it (along with forms of legal name that are not known to be "legal", some of which more clearly so than others). BTW, Mr. Grau tells me his California license has "Ted"; so much for official documentation.... --MartyD 14:17, 3 March 2017 (UTC)

Future books and Fixer lists

When I am clearing the lists from Fixer, how should I treat future books (some started showing up in the new lists)? I know better than to add a November 2017 release that showed up from the deferred list - I will leave it deferred - but I have a few April releases in the new NightShade list. Should I add them to the DB, or leave them on the list and add them after they are published or closer to the date? Annie 03:24, 2 March 2017 (UTC)

We generally accept new pubs which are scheduled to appear in the next 60-90 days. Anything farther out is too chancy based on our experience. That's how Fixer is supposed to work as well, although his human maintainer occasionally overlooks something. (You know how humans can be! :-) Which is why we have cleanup reports that catch anything that is 91+ days in the future. Ahasuerus 03:37, 2 March 2017 (UTC)
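For illustration only, a minimal sketch of the acceptance window and cleanup check described above; the function names and the exact 90-day cutoff are assumptions rather than the actual ISFDB code:

    from datetime import date, timedelta

    ACCEPT_WINDOW_DAYS = 90   # illustrative cutoff based on the 60-90 day guideline above

    def within_acceptance_window(pub_date, today=None):
        """True if a forthcoming pub is close enough to its release date to be added."""
        today = today or date.today()
        return pub_date <= today + timedelta(days=ACCEPT_WINDOW_DAYS)

    def flagged_by_cleanup_report(pub_date, today=None):
        """True if the pub is 91+ days in the future and would show up on a cleanup report."""
        today = today or date.today()
        return pub_date > today + timedelta(days=ACCEPT_WINDOW_DAYS)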
So the April and May ones are fine; July - not so much. I know the general rules - I was more wondering whether there is something special about Fixer's lists :) I just tagged the future ones with dates to see how it looks over here. Annie 03:43, 2 March 2017 (UTC)
Well, Fixer gets his data from the same secondary sources that humans do, so the same rules apply, give or take. One thing that I have noticed, however, is that an ISBN that has been delayed once is fairly likely to be delayed again. I would move them to the "on hold" list and check them again once they have been published. Safety first! :-) Ahasuerus 04:35, 2 March 2017 (UTC)
Which is why I added the dates as they stand now as a prefix instead of updating the line with the date Fixer found. I think I will just wait for each one to be published and will add it then if it is still not in the DB. Safer this way :)
Oh, and whenever you have one, you can post another list for me - I really do not care about how old or which publisher or what it contains - I need something to alternate with the deferred list. Annie 04:41, 2 March 2017 (UTC)
Will do -- thanks again for working on Fixer's catch! Ahasuerus 04:50, 2 March 2017 (UTC)
What should I do if I manually find a book whose publication date is uncertain or which has been delayed? For example this one got added when it went on preorder last November: The publisher first said it would be out in the first quarter of 2017, and now their website talks about a "likely shipping date of Q2 2017." I have the date on that record set to 2017-00-00 but that doesn't seem right somehow. --Vasha 04:54, 2 March 2017 (UTC)
I would definitely wait until we are sure that the book has been published. Just think how many publication records we would have for Last Dangerous Visions if we had acted on every pre-publication announcement! Ahasuerus 17:11, 2 March 2017 (UTC)
So should I change the date to 8888 if I find a book in the database but its publication date is uncertain -- vague promises of "forthcoming very soon"? I believe I have seen books on Amazon where it isn't made clear that the date listed is a preorder date, and you have to go to the publisher's website to find details. So some books may be mistakenly added from Amazon. --Vasha 17:24, 2 March 2017 (UTC)
Well, 8888 means that a title/pub was announced at some point in the past, but never appeared. In the types of cases that you described, "9999-00-00" ("forthcoming") may be more appropriate. However, although the software still supports the 9999 format, it's been deprecated for most practical purposes. There are no 9999 titles/pubs in the database and we have cleanup reports that look for them. It's probably better to have these questionable ISBNs removed from the database and added to some Wiki page where they can be manually reviewed by an editor. Our robot, Fixer, has a special "0" ("insufficient data") queue which serves the same purposes. Ahasuerus 18:04, 2 March 2017 (UTC)

"Problematic" Records

Hello, as one of the moderators who work every day on the cleanup reports, I've noticed a clear increase recently in "problematic" records (for example this one). This concerns unPVed records that usually seem to come directly from Amazon or an unknown secondary source. The problem is that they contain slight mistakes that need to be fixed. It can be that an earlier publication has been added and the dates of the components not changed (as in the aforementioned example), some improper lengths, some pseudonyms left unvarianted, some reviews left dangling or misattributed, or some magazines not linked to a series. When I stumble on such mistakes, I usually correct them and drop a line to the contributor if he/she is identifiable (either as PV or because I recognize his/her "style" of notes) and if it's a recurring mistake. For this new type of bibliographical error I don't easily know where the data comes from and how it is processed and moderated, so I'm at a loss as to how to act and whom to contact (is the oversight in the submission or the moderating phase?). As there are seemingly some "behind-the-scenes moves" on Fixer submissions (which I did usually also work on when they were public), perhaps something should be done about this, as our validation process seems to be performing less well and letting markedly more errors through. Thanks. Hauck 09:06, 2 March 2017 (UTC)

Just to fuel your reflections: this record was also problematic (in this case it was a few ESSAYs and POEMS with a length), as was this title that had a "Short Story" length assigned, and this one with an unregularized publisher (it had "Paper Golem LLC" instead of our "Paper Golem"). As most of these errors should have been caught by a moderator (with our system of alarms), I'm wondering what changed in our processes. Hauck 09:10, 2 March 2017 (UTC)
Two of those [11] [12] were mistakes by me, but not the others. I really appreciate having an extra eye proofreading my work. But I have been submitting floods and floods of edits, maybe someone is just approving my work too fast without looking it over? --Vasha 15:21, 2 March 2017 (UTC)
That's also what worried me, not the fact that more recent contributors make mistakes (as we all do). Some of these errors (e.g. a pseudonym used, a "new" publisher, some types of invalid data, an already existing ISBN) are associated with warning flags at the moderating phase, such flags being seemingly ignored. Hauck 18:19, 2 March 2017 (UTC)
If you could compile a list of recently created/updated "problem" records and post it here, I could try to find the related submissions in the submission table. That should let us determine who the approver was. Perhaps we have a moderator who was on hiatus for a while and has become more active recently? Ahasuerus 19:00, 2 March 2017 (UTC)
  • This issue was approved but was without a title series and showed up on our cleanup report. As the contributor is not very well versed in our idiosyncrasies, it's unlikely that she would realize her mistake and correct it herself, so perhaps this should also warrant a warning ("EDITOR record without title series"). Hauck 08:44, 3 March 2017 (UTC)
    I see that this was a NewPub submission created by VWCrist and approved by Albinoflea. We could add a warning, but I wonder if we should require the submitting editor to enter something in the "Series" field for new magazine/fanzine submissions. Ahasuerus 17:13, 3 March 2017 (UTC)
  • Same for this other issue of a different magazine by the same contributor; note also that all the variants were made by me and that I had to change the type of this text (and three others!) from OMNIBUS to POEM (perhaps warranting a warning along the lines of "Disallowed content for this container type"?). Hauck 08:44, 3 March 2017 (UTC)
    This was another VWCrist submission approved by Albinoflea. We could add a warning, but it may be better to change the software to disallow certain combinations of publication types and title types in the Contents section. For example, an OMNIBUS publication can contain NOVEL, COLLECTION and ANTHOLOGY titles, but can't contain EDITOR titles. Conversely, a MAGAZINE publication can (and must) contain one EDITOR title but can't contain OMNIBUS titles. Ahasuerus 17:24, 3 March 2017 (UTC)
I remember these two submissions from VWCrist; there was something wrong with one that I caught and left a note on their page about (price and pagination), so perhaps I was focused just on the things I saw right away and missed some other things that are obvious now that I'm looking at them again.
I think a flag for "Series" field in MAGAZINE submissions would be useful, perhaps both on the confirmation and moderation stages; I know I forget to do that myself sometimes and always have to go back in and fix it. Albinoflea 18:28, 5 March 2017 (UTC)
I have been thinking about this issue. I am not sure what the standard workflow is in this case, so I am not positive what kinds of warnings/errors we may want to add. I think I will ask on the Community Portal. Ahasuerus 17:45, 6 March 2017 (UTC)
As for the OMNIBUS in MAGAZINE and other type mismatches... I'm surprised that they're not being flagged on the moderation page, but I think preventing them to begin with would be the way to go. Albinoflea 18:28, 5 March 2017 (UTC)
Right, FR 997 has been created to address this issue. Ahasuerus 17:57, 6 March 2017 (UTC)
  • This 2003 publication had an ISBN-13 (note that Amazon gives 2002-04); so did that one and that one, where the ISBN-10 is readable on Look Inside. Hauck 08:44, 3 March 2017 (UTC)
    The relevant submissions were created and approved by Albinoflea: 1, 2 and 3. We already have a warning for date/ISBN format mismatches, so there isn't much we can do on the software side. I will drop Albinoflea a note about the issue. Thanks for cleaning up these records! Ahasuerus 17:34, 3 March 2017 (UTC)
These were entered to prevent dangling reviews in an issue of SFS, and when I was almost ready to submit I accidentally closed my tab and had to re-enter all the reviews and a lot of other elements, so I was a bit discombobulated... which isn't an excuse, but I can see how I would have gotten sloppy.
The ISBN warning that shows up says "ISBN already on file", but in this case it isn't; it's a date vs. ISBN-10/ISBN-13 issue. I remember clicking through that and just winding up on my pub record, thinking that was weird, but I didn't connect in my head that I had copied the wrong ISBN from Amazon. Albinoflea 18:28, 5 March 2017 (UTC)
OK, FR 1004, "Display a warning for pre-2005 pubs w/ISBN-13s and post-2007 pubs w/ ISBN-10s" has been created. Ahasuerus 17:42, 6 March 2017 (UTC)
  • This issue had two EDITOR records (the correct one and a mistyped ESSAY, as usual); in this case the moderator who didn't see it was me (IIRC). As above, perhaps a check on the number of EDITOR records in an approved publication would be in order (hardly a day goes by when I don't correct such a mistake). Hauck 06:46, 4 March 2017 (UTC)
    Good point - I have tweaked FR 997. Ahasuerus 16:26, 4 March 2017 (UTC)
  • This NOVEL had two NOVEL records: the one created automatically and the one added by an overzealous contributor and approved by an overworked moderator :-) (me, in short) who was fooled by the fact that "AddPub" was used. This is quite frequent and may warrant a consistency check (no more than one NOVEL record in a NOVEL). Hauck 09:55, 5 March 2017 (UTC)
    Right, I plan to add that check as part of this FR. It will also cover missing SHORTFICTION/POEM titles in CHAPBOOK publications. Ahasuerus 14:27, 5 March 2017 (UTC)
  • A more complicated one: the author of this title was changed (likely by this submission) from "KG Johansson" (what appears on the book and in Wikipedia's article) to "K. G. Johansson" (an overzealous ISFDB regularization). This had the net result of adding the record to the cleanup report "Publication Authors That Are Not the Title Author" and, more importantly, of creating two orphan reviews that were correctly linked to "KG Johansson". Perhaps a warning along the lines of "This change of author will cause problems as there are reviews linked to this title" would be interesting (but perhaps too complicated). Hauck 07:52, 6 March 2017 (UTC)
  • Hm... I think it would be possible but, as you suggested, rather time-consuming to implement. Ahasuerus 18:09, 6 March 2017 (UTC)
  • There is also a clearly growing number of non-matches for our "famous" clickable categories (Non-Genre:Juvenile:Novelization:Graphic Format); perhaps a warning during the varianting phase would help ("CAUTION: some non-matching characteristics between varianted titles"). Hauck 08:22, 6 March 2017 (UTC)
  • Yes, I can see how that would be helpful. FR 1005 has been created. Unfortunately, I don't think we can make this determination programmatically when making a title into a variant of an existing title. When one of the two titles is a juvenile and the other one is not, there is no way to determine what the correct flag value should be short of human intervention. Ahasuerus 18:06, 6 March 2017 (UTC)
  • More generally, I perceive a more "relaxed" attitude about database integrity from some of our (self-)moderators. This publication, for example, had no fewer than five unvarianted titles. Hauck 08:22, 6 March 2017 (UTC)
  • This author was made a pseudonym of that one, but the proper variants were created neither by the original submitter (who is likely not aware of the problem) nor by the approving moderator in a follow-up phase. Net result: there are 65 titles to variant. Hauck 18:51, 11 March 2017 (UTC)
    And here is the submission. Ahasuerus 19:16, 11 March 2017 (UTC)
Thanks, let me take a look... Ahasuerus 14:19, 3 March 2017 (UTC)
Just 2 cents here. Most of those (a duplicate ISBN and a new author or publisher, for example) raise a flag on the page as soon as you submit the thing as well. I am in a constant state of self-rejecting submissions because, as much as I pay attention, I slip now and then, and I am very fast to cancel the submission, fix it and submit again - unless that is what I am trying to achieve, of course (adding a new element or duplicating the ISBN for a later printing, for example). Or because I stuck a length where it does not belong (on a couple of essays last time - none of the above are mine (phew, I got worried there for a bit)). I find it very useful to inspect the confirmation page for visible issues after I submit something. Annie 18:37, 2 March 2017 (UTC)
Ditto to everything you said :-) The warnings on the submission confirmation page are very useful. But I don't recall seeing a warning for length assigned to a non-SHORTFICTION title. There should be one. --Vasha 19:45, 2 March 2017 (UTC)
No, there isn't one but because they are lined up in a very nice table, it is easy to look at the list and spot those. A lot easier than on the edit screen anyway :) Annie 19:50, 2 March 2017 (UTC)
Warnings are generally used to alert the approving moderator (and now the submitting editor) about a "valid but unusual" data element. A new series, a pseudonym, a duplicate ISBN, a new publisher, etc may well be valid, but they require verification and perhaps follow-up submissions.
A length/title type conflict, on the other hand, is never valid and should generate an error rather than a warning -- just like a required field (title, author, date) generates an error if you leave it blank.
I will go ahead and create an FR to add length validation. Ahasuerus 20:33, 2 March 2017 (UTC)
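As a rough illustration of the hard checks discussed in this thread -- a length only being valid on SHORTFICTION titles, the publication type/title type restrictions of FR 997, and the ISBN-10/ISBN-13 date warning of FR 1004 -- here is a hypothetical sketch. The rule tables encode only the examples mentioned above, and none of this is the actual ISFDB validation code:

    # Only the pub type / title type combinations mentioned above are encoded here.
    DISALLOWED_CONTENT = {
        "OMNIBUS": {"EDITOR"},     # an omnibus can't contain EDITOR titles
        "MAGAZINE": {"OMNIBUS"},   # a magazine can't contain OMNIBUS titles
    }

    def validation_messages(pub_type, pub_year, isbn, content):
        """content is a list of (title_type, length) tuples; returns a list of messages."""
        messages = []
        for title_type, length in content:
            if length and title_type != "SHORTFICTION":
                messages.append("Error: a length is not valid for %s titles" % title_type)
            if title_type in DISALLOWED_CONTENT.get(pub_type, ()):
                messages.append("Error: %s pubs may not contain %s titles" % (pub_type, title_type))
        title_types = [t for t, _ in content]
        if pub_type == "MAGAZINE" and title_types.count("EDITOR") != 1:
            messages.append("Error: a MAGAZINE pub must contain exactly one EDITOR title")
        if pub_type == "NOVEL" and title_types.count("NOVEL") > 1:
            messages.append("Error: a NOVEL pub may not contain more than one NOVEL title")
        # FR 1004: warn about unlikely ISBN format / publication year combinations
        digits = (isbn or "").replace("-", "")
        if len(digits) == 13 and pub_year and pub_year < 2005:
            messages.append("Warning: ISBN-13 on a pre-2005 publication")
        if len(digits) == 10 and pub_year and pub_year > 2007:
            messages.append("Warning: ISBN-10 on a post-2007 publication")
        return messages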
It is certainly possible that the higher number of errors is caused by the higher number of submissions in recent months. I'm going to check the counts over the last 12 months to see what the numbers show.
As far as the listed examples go, the "Paper Golem" one was caused by this submission, which was created and approved by a moderator. As a general rule, Fixer always uses the same Notes format, so any publication that uses a different format -- in this case "Kindle format. Data from Amazon, 2017-03-01." -- is almost certainly the handiwork of a human.
Re: Space Wolves, where "an earlier publication has been added and the dates of the components not changed", I have checked the software behavior on the development server and found that Import/Export submissions do NOT display a warning if the date of a Content title is after the date of the publication. I'll go ahead and create an FR to implement a warning -- thanks for identifying the problem! Ahasuerus 16:18, 2 March 2017 (UTC)
Checking submission history, I see that there has been some growth, but not as much as I suspected:
+---------+----------+
| month   |    count |
+---------+----------+
| 2015-01 |    28715 |
| 2015-02 |    23138 |
| 2015-03 |    26098 |
| 2015-04 |    20895 |
| 2015-05 |    23291 |
| 2015-06 |    27899 |
| 2015-07 |    29183 |
| 2015-08 |    28882 |
| 2015-09 |    29918 |
| 2015-10 |    26666 |
| 2015-11 |    21182 |
| 2015-12 |    22207 |
| 2016-01 |    29662 |
| 2016-02 |    26310 |
| 2016-03 |    30597 |
| 2016-04 |    25209 |
| 2016-05 |    29105 |
| 2016-06 |    30918 |
| 2016-07 |    29733 |
| 2016-08 |    29318 |
| 2016-09 |    36458 |
| 2016-10 |    35730 |
| 2016-11 |    33273 |
| 2016-12 |    34283 |
| 2017-01 |    35237 |
| 2017-02 |    33914 |
+---------+----------+
I should probably use these stats to create a graph and add it to the ISFDB Statistics and Top Lists page. Ahasuerus 16:37, 2 March 2017 (UTC)
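If a graph does get added, something as simple as the following matplotlib sketch would do; it uses a hand-copied subset of the table above and is purely illustrative:

    import matplotlib.pyplot as plt

    # A few of the monthly totals from the table above
    counts = {
        "2015-01": 28715, "2015-07": 29183, "2016-01": 29662,
        "2016-07": 29733, "2016-09": 36458, "2017-01": 35237,
    }

    months = sorted(counts)
    plt.plot(months, [counts[m] for m in months], marker="o")
    plt.title("ISFDB submissions per month (sample)")
    plt.ylabel("Submissions")
    plt.xticks(rotation=45)
    plt.tight_layout()
    plt.savefig("submissions_per_month.png")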
IMHO one of the reasons for this scourge of problems is that we're now entering the territory of small press magazines (sometimes from "specialized" fandoms like poetry, furry, etc.), whose handling at the moderator level requires some care and, more importantly, a lot of follow-up because of their specificities (lots of pseudonyms, variability of editors and/or publishers) and because of the profile of those entering them (to be blunt, they're not very receptive to our bibliographical demands, as they're not here for bibliography's sake but probably to showcase their wares). Hauck 08:58, 3 March 2017 (UTC)
FR 991 has been created. Thanks again! Ahasuerus 16:49, 2 March 2017 (UTC)
If you are doing a graph, can you also do one specifically for NewPub/AddPub/ClonePub submissions only? I am curious about those numbers a lot more than about the overall submissions (I know moderators need to deal with all of them and that those numbers matter). Annie 18:37, 2 March 2017 (UTC)
We already have a breakdown of contributors by submission type, so I am sure we can have the same type of breakdown on the proposed page. It may be a bit tricky because some submission types share the same internal code (an unfortunate design decision which is difficult to undo), but it should be reasonably close. Ahasuerus 18:57, 2 March 2017 (UTC)

Author details

I've confirmed that the author details edit I have on hold was submitted by the author herself. I know we had the discussion somewhere, but I can't for the life of me find it. We've agreed to honor such requests/edits, yes? --MartyD 00:07, 6 March 2017 (UTC)

As I recall, the consensus was that we would remove the date of birth, but not the year of birth. I don't recall a discussion of the "Place of birth" field, which appears to be a new wrinkle. Ahasuerus 00:42, 6 March 2017 (UTC)
I think I'm going to have to defer to you and also direct her to you for any follow-ups. Your butt is going to be the one on the line if someone ever tries to take legal action. Let me know what you want me to do with the submission and how you want me to respond to her (I have an email to reply to). Thanks. --MartyD 11:33, 6 March 2017 (UTC)
If it's publicly available information there are no grounds for legal action. A few DOBs that I entered were reduced to YOB, but I got the info from the Contemporary Authors. Probably thousands of copies in print, not to mention online.--Rkihara 16:08, 6 March 2017 (UTC)
I have posted a Policy change proposal on the Community Portal. As far as "publicly available information" legal standard goes, we discussed it on the R&S page a while back -- see the link that I posted on the Community Portal. We may be safe when dealing with printed sources like Contemporary Authors, but Web sources, which are much less permanent, may be trickier. If an author lists her DOB in the "Bio" section of her Web site, but deletes the Web site and the Internet Archive copy two weeks later, is it "publicly available information" any more? Ahasuerus 16:18, 6 March 2017 (UTC)
Well, the ISFDB account with the hosting company is owned by Al -- I can't even have the server bounced without him -- but that's a secondary consideration. I think we need to update our Policy to cover these types of cases. I believe the current de facto policy as approved on the Rules and Standards page is similar to Wikipedia's:
  • If the subject complains about the inclusion of the date of birth, or the person is borderline notable, err on the side of caution and simply list the year.
We can eliminate the "borderline notable" part, but otherwise it seems to fit our needs. I'll post a note on the Community Portal. Ahasuerus 15:45, 6 March 2017 (UTC)
I have removed the date (but not the year) of birth for now. Ahasuerus 17:33, 6 March 2017 (UTC)
I see she has made another submission, too. I've released my hold on the first one. I think it would be best if you dealt with it. Thanks. --MartyD 03:01, 7 March 2017 (UTC)
Sure thing, I'll put it on hold and we'll see what we can do. Ahasuerus 03:30, 7 March 2017 (UTC)

Dragon Games: The Junior Novel Edit Error

Please see the PubUpdate by Anniemod for Dragon Games: The Junior Novel in the edit queue (currently third). It is producing a Python exception. -- JLaTondre (talk) 00:06, 7 March 2017 (UTC)

Looking... Ahasuerus 00:10, 7 March 2017 (UTC)
Fixed! Ahasuerus 00:18, 7 March 2017 (UTC)
What was the problem? (to know not to do that again) :) Annie 00:21, 7 March 2017 (UTC)
The developer failed to identify and test a certain use case before installing the latest patch. Bad developer, bad! No second supper tonight! Ahasuerus 00:26, 7 March 2017 (UTC)
Ah, so nothing stupid that I did in one of the edits - would not have surprised me if it was.
PS: He, who does not write code, does not create bugs :) Give a second supper to the poor developer or tomorrow morning we may find the site randomly swapping novel and non-fiction for example... Annie 00:31, 7 March 2017 (UTC)

Python error

I made an update to this publication, which resulted in a Python error. I have posted the full error message to: https://dl.dropboxusercontent.com/u/86776122/Python%20Error.pdf. I have left the pending edit in the submissions queue. The changes I made were: (1) Added text to the publication notes; (2) Entered "9" for the page on which the novel began. The Python complaint appears to be in parsing a date, which was not changed. The appearance of the error message makes it appear to be associated with entering the page number for the novel. Enjoy :-) Chavey 00:28, 7 March 2017 (UTC)

Could you please check that submission again? You created it at 7:15pm and I installed a fix for this problem (see immediately above) at 7:17pm, so hopefully the bug has been squashed. Ahasuerus 00:35, 7 March 2017 (UTC)
All better! Chavey 00:46, 7 March 2017 (UTC)

vonniewinslowcrist.files.wordpress.com

VWCrist has given us permission to link to her site for cover images. See the publication edit I have on hold as well as comments on her talk page. Thanks. -- JLaTondre (talk) 03:11, 9 March 2017 (UTC)

Done! Ahasuerus 03:43, 9 March 2017 (UTC)

Links to Deleted Wiki Pages

The database is not always detecting that a wiki page has been deleted. For example, see William Shatner, which has an Author:William Shatner link. But as you can see from the wiki link, that page doesn't exist. It was deleted in 2013. -- JLaTondre (talk) 23:39, 9 March 2017 (UTC)

Thanks, I'll take a look. On the plus side, once we finish the Wiki migration, it won't matter any more. Ahasuerus 23:58, 9 March 2017 (UTC)
Fixed. There were some complications on the Wiki side which came to the surface once we started deleting more Wiki-based author pages. Ahasuerus 00:28, 10 March 2017 (UTC)

Cleanup reports

Hello, just in case, I am informing you that unless things change, I will no longer work through and correct data from our cleanup reports. It usually takes me between half an hour and an hour a day to straighten things out. It's not an uninteresting task (I've learned a lot from it), but I simply don't want to be the one who cleans up the "shit" (if you'll pardon the word) left by other moderators. IMHO a moderator's task is not finished when you click the "Approve" button; there is usually a non-negligible number of perfectly tedious tasks to perform afterwards (making variants, linking pseudonyms, verifying publisher, ISBN, categories, etc.), as most of these errors will NEVER be corrected by the initial contributors, particularly if the moderator doesn't want to play the "bad cop" and call them to order. Such errors are simply the consequence of sloppy work. I know that we're all volunteers here, but that's not a reason to eschew all the ancillary tasks. Hauck 09:51, 12 March 2017 (UTC)

Thanks for letting me know and for all the work you have done on these reports! As I mentioned in my "Roadmap 2017" post a few weeks ago, I plan to make more cleanup reports available to non-moderators. I think I should make it a higher priority in order to "spread the load" among all editors. Ahasuerus 15:41, 12 March 2017 (UTC)

Python errors

Python is out to play on the site again so we need the developer to get him back in the playpen - over here. This is reachable via this cleanup report. Annie 17:31, 13 March 2017 (UTC)

Something really bad must have happened to this poor pub -- some of the required data is missing. I suspect that a submission errored out half way through the approval process. I'll need to adjust the affected date handling function to handle missing data gracefully. Ahasuerus 17:45, 13 March 2017 (UTC)
Poor pub - that's not fun at all. You are probably right because this one seems to have happened shortly after and looks fine (and is the same name so I suspect it was the successful attempt). Thanks for fixing it! Annie 17:55, 13 March 2017 (UTC)
The software has been patched and the pub has been given a decent burial. RIP, pub, we hardly knew ye! Ahasuerus 19:49, 13 March 2017 (UTC)

(unindent)

[Image: Python error 20170313.png]

Not sure if it's related, but I'm getting Python errors here, here, here, here, and here. Here's the error (image, right) for the last one, just in case it doesn't appear the same to you.···日本穣 · 投稿 · Talk to Nihonjoe 23:15, 13 March 2017 (UTC)

You are right, it was an unintended side effect of the last change. I believe I have fixed it now. I really ought to redo the way the software handles dates sooner rather than later. At the moment it's one band-aid on top of another... Ahasuerus 23:34, 13 March 2017 (UTC)
Everything seems to be all better now. Thanks! ···日本穣 · 投稿 · Talk to Nihonjoe 00:30, 14 March 2017 (UTC)

Magazine Wiki pages not linked to Magazine records

The Magazine Wiki pages not linked to Magazine records clean-up report is listing two pages that were deleted last year. They are listed with a space before the name so maybe something related to that is confusing the code. Minor issue. Thanks. -- JLaTondre (talk) 14:39, 14 March 2017 (UTC)

The problem appears to be on the Wiki side -- this list of all Magazine pages includes " Fantastic Adventures" and " Isaac Asimov's Science Fiction Magazine" even though the pages were deleted 5 months ago. I am 95% sure that it was the leading spaces/underscores that tripped the Wiki software. I'll see if I can get the cleanup reports to ignore these pages. Thanks for reporting the issue! Ahasuerus 15:28, 14 March 2017 (UTC)
The cleanup report logic has been adjusted. The report should find no matching Wiki pages tomorrow morning. Ahasuerus 16:45, 14 March 2017 (UTC)
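For what it's worth, a toy sketch of the kind of adjustment described above -- simply skipping page names that come back from the Wiki with leading spaces or underscores. The function name is made up and this is not the actual report code:

    def pages_worth_checking(wiki_page_names):
        """Ignore page names with leading spaces/underscores, which the Wiki
        can keep returning even after the pages themselves have been deleted."""
        return [name for name in wiki_page_names
                if not name.startswith((" ", "_"))]

    print(pages_worth_checking([" Fantastic Adventures", "Analog Science Fiction"]))
    # ['Analog Science Fiction']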
I thought they were just sitting there until the report is discontinued :) Good to see them gone! Annie 17:58, 14 March 2017 (UTC)

A Song for No Man's Land

Hi,

Is this series completely the work of Fixer? Tor calls all 3 of them novellas, and I have the first one - it may be borderline, but I don't think it qualifies as a novel. If it is Fixer's, I will overhaul them to become novellas (I have the collection with all 3 in the e-books list I am adding, and if they stay novels, it will be an omnibus). If it is not Fixer's, who added it (so I can go discuss it with them)? Thoughts? Thanks! Annie 17:23, 15 March 2017 (UTC)

I see that none of the pubs have been verified. According to notes, some of them were added by Fixer and some were entered by a human who used Locus and Amazon as his or her source.
When a publication is not verified, it's generally safe to change things like the title type. It's only when higher profile issues arise, e.g. "Does the book even exist?", that it becomes worthwhile to investigate the origins of the pub. So change away! :-) Ahasuerus 17:41, 15 March 2017 (UTC)
I will also be verifying the first (and I am pretty sure I have the other 2 somewhere in the boxes). Just wanted to make sure I do not step on someone's toes. :) Annie 17:48, 15 March 2017 (UTC)

option to add parameter not working in notes template

Hi -- there doesn't seem to be the option to add the name parameter, like the help says, when using the {{A}} template in notes. See this submission. Is it disabled on purpose? --Vasha 00:44, 16 March 2017 (UTC)

How exactly are you trying it? I have added quite a few in the last few days with no issue; {{A|Seanan McGuire}}, for example, works like a charm. Annie 00:48, 16 March 2017 (UTC)
The help page says you ought to be able to use an extra parameter to display the name differently than the canonical form, for instance {{A|Charles P. Baudelaire|name=Charles Baudelaire}} ought to display "Charles Baudelaire" but instead it displays "Charles P. Baudelaire|name=Charles Baudelaire" --Vasha 01:06, 16 March 2017 (UTC)
The page that you linked is for the templates in the wiki - see the first sentence on the page. The one for the Notes fields is over here :) They do work differently. Annie 01:08, 16 March 2017 (UTC)
OK, thanks for that info. Ahasuerus, is there some reason the notes templates don't use extra parameters? --Vasha 01:16, 16 March 2017 (UTC)
Well, ours is a homegrown implementation, so it's limited to the functionality that has been requested. At the moment the software supports templates without parameters (like "Tuck") and templates that take one parameter (like "OCLC".)
Early on I considered adding support for a second, "display value", parameter, but I decided that it could be somewhat dangerous. If we were to link to one author record but display another, it could mislead our users.
However, there are some third party sites like SFE3 that have non-intuitive URLs for record-specific pages like "www.sf-encyclopedia.com/entry/deford_miriam_allen". If we were to create a template to link to them, we might have to add support for "display value" parameters after all. Ahasuerus 01:54, 16 March 2017 (UTC)
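To make the "display value" idea concrete, here is a toy regex-based expander for an {{A|...}}-style note template with an optional second, positional parameter. This is not the actual ISFDB implementation, and the URL format is an assumption used only for the example:

    import re

    def expand_author_templates(note_text):
        """Replace {{A|Canonical Name}} or {{A|Canonical Name|Displayed Name}}
        with an HTML link to the canonical author record."""
        def repl(match):
            target = match.group(1).strip()
            display = (match.group(2) or target).strip()
            url = "http://www.isfdb.org/cgi-bin/ea.cgi?" + target.replace(" ", "+")
            return '<a href="%s">%s</a>' % (url, display)
        return re.sub(r"\{\{A\|([^|}]+)(?:\|([^}]+))?\}\}", repl, note_text)

    print(expand_author_templates("Reviewed by {{A|Stephen King (I)|Stephen King}}"))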
I wanted to use it to link to a disambiguated author without displaying the disambiguator. --Vasha 02:25, 16 March 2017 (UTC)
That's exactly what I was worried about. There is a big difference between "Written by Stephen King (I)" and written by "Stephen King" :-) Ahasuerus 02:52, 16 March 2017 (UTC)
But the disambiguator by itself is not informative to a casual eye. One of the purposes of linking to the page is so a person can check who it is if uncertain. Is it really so important to have a cryptic disambiguator displayed? --Vasha 03:02, 16 March 2017 (UTC)
My take on it is that we display the full disambiguated name on all bibliographic pages, so it would be consistent to display it in notes as well.
Of course, if there is sufficient support for a "displayed value" parameter, I will implement it. I may need to do it to support sites like SFE3 as per the discussion above anyway. Ahasuerus 03:11, 16 March 2017 (UTC)
OK I'm taking this debate to the Community Portal then. --Vasha 03:22, 16 March 2017 (UTC)

Pub series report question

Hello, is there a report similar to Series with Numbering Gaps but for Pub series? Annie 01:24, 18 March 2017 (UTC)

I am afraid not. Pub. series numbers are not limited to actual numbers, so we would have a lot of false positives -- see Ace Double for a classic example. Ahasuerus 01:52, 18 March 2017 (UTC)
Ah, I did not think of that problem - even though I entered some non-numerical ones just last week. Never mind. Sorry for bugging you :) Annie 01:56, 18 March 2017 (UTC)
Not a problem! Ahasuerus 02:15, 18 March 2017 (UTC)

Fixer's future

This may be a good time to pause for a minute and consider "lessons learned". For example, is this a viable way going forward? Would the previously discussed (and fairly time-consuming to implement) "automated queue" approach save a lot of time in the long run? Should we post these lists on some Wiki page so that multiple editors could grab blocks of ISBNs and move them to editor-specific sub-pages along the lines of what you have been doing? Ahasuerus 15:40, 18 March 2017 (UTC)

I had been thinking about it - these last few batches were very different from the previous ones (SF publishers and the Comics guys). I would still love to have the complicated system we were discussing but... I think that something a lot simpler and easier will be a better fit (and will be reusable for other jobs as well). The following things were very useful in these few batches:
  • the ability to work on one series at a time (add one, hit the back button in the browser, change the name, the number in the series, the ISBN, pages, picture and cover artist and eventually the price, and submit again). Rinse and repeat. And that was not just for format series -- working on the chapbooks of a single author followed a similar pattern.
  • the very manual decision of when to do an AddPub (when nothing but the name/author matches and there is no content) compared to a ClonePub (when content is already there and mapped). What was also critical here was that if you edit the already existing record, you get your new record right on the first attempt, as opposed to a Clone, then an edit on both, and eventually an import/export operation.
  • the ability to wait - if I have 3 versions of the same book in the queue, I need to create one, wait for it to be approved and then do the ClonePub for the other 2 (or more).
With this being said, here's what really makes the job easier:
  • Identify all the books that need entering by an author or publisher
  • Making sure no one pulls the books out from under your nose when you are in the middle of a series
  • The ability to wait.
So how about giving Fixer its own DB table with a structure such as "id, ISBN, ISBN link, title, author, publisher, price, pages, format, editor, date"? No cleanup - just the format you are posting in my lists, but in table form. Then create a new page that allows an editor to search the table by publisher, author and title (wildcards allowed) and then "reserve" the entries (which sets the date and the editor field). Have Fixer remove ISBNs already in the DB once a day (or more often if needed). And allow the editor to set the "editor" field to "reject" or "queue2" (not mandatory - just saving time; the wiki can work for that as well). The date is needed so that in case an editor stops dealing with things, they can be released (30 days after something is "taken", it gets released; that allows enough time for waiting).
Now... we could make it fancier: instead of the editor submitting manually after cleaning up the author name, title, publisher and so on, they could prepare the data for automation in a separate table (all fields cleaned) - but an AddPub can turn into a ClonePub quickly depending on what happens, so there will be follow-ups anyway. And that may as well become step 2 of the plan. Plus, if we have the ability to work with lists this way, it may open the door for data from other sources to be added for processing and additions.
Now - there can always be a "populate into AddPub (requiring an ID)/NewPub" option to start off the process... but judging from the quality I have seen in these lists, full automation will never work. :)
And that also means that if you need to take a few weeks off, there will be enough ISBNs to work on if someone is interested. However, whoever works on these will need to understand that the job is not "copy and submit" but really checking every field both in online sources and in our DB (is that the same author or do we need disambiguation; what is on the title page for the title; how is the publisher entered in our DB; who is the publisher actually (Look Inside saves the day for that); is this the first publication under that title, and if not, what is the date of the first, and can it be added first, or does the title record need a change; fixing series and pub series; and so on). "Add and then update" will very easily end up with no updates, and it is not that hard to deal with all of this while adding.
Sorry for the long post - just putting on paper (well...) what I had been thinking. Annie 20:03, 18 March 2017 (UTC)
And despite the long post, I managed not to answer one of the main questions. Because of what I was explaining about series and authors and so on, posting lists that everyone can grab a section from is going to split the logical groups. If you post full groups and someone can claim a few groups, then yes, that will work. I am not sure if it makes enough of a dent in the queues, but the more of these I do, the easier it gets (especially when I start recognizing the publishers :)) - I hope it is still useful, no matter how little it helps. In the long run, yes, that may be a usable pattern to get other editors involved. With all the caveats above. Annie 21:15, 18 March 2017 (UTC)
Very interesting, thanks! It will take me some time to absorb everything, but here are my preliminary thoughts.
First, it occurs to me that I may need to change the way Fixer's data acquisition and manipulation logic works. At this time Fixer's data acquisition processes (there are a few) capture and store raw data. It's only when Fixer presents each record for my final approval and adjustment -- right before I create the actual submission -- that he corrects the data based on the rules that I have added over the years. For example, one of the rules says "If the stated publisher is Hollywood Comics, change it to Black Coat Press." Another rule says "If this is a US pub AND the binding is "paperback" AND we don't know whether it's a pb or a tp, AND the price is more than $8.99, change the binding code from unknown to tp." The reason why it has worked sort of "OK" up til now is that I have been the only person creating submissions. Now that we are opening up the process, we need to apply the data transformation rules when Fixer captures the data. I have already tweaked things a bit, but further changes will be needed. Ahasuerus 23:29, 18 March 2017 (UTC)
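A minimal sketch of what applying the two example rules above at capture time might look like; the record layout and field names are assumptions, not Fixer's actual code:

    PUBLISHER_MAP = {"Hollywood Comics": "Black Coat Press"}

    def normalize(record):
        """Apply publisher and binding rules as soon as the raw data is captured."""
        record = dict(record)
        publisher = record.get("publisher")
        record["publisher"] = PUBLISHER_MAP.get(publisher, publisher)
        # A US "paperback" of unknown sub-type priced above $8.99 => trade paperback
        if (record.get("country") == "US"
                and record.get("binding") == "paperback-unknown"
                and record.get("price", 0) > 8.99):
            record["binding"] = "tp"
        return record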
I suspected as much considering the way you had been posting the batches. :) But yeah - if the data is prepared, that should make it easier Annie 23:47, 18 March 2017 (UTC)
Second, the proposed DB table with a structure such as "id, ISBN, ISBN link, title, author, publisher, price, pages, format, editor, date" seems similar to the "automated" queue which we discussed earlier. The functionality is somewhat different, but it would capture the same data elements. Also, it couldn't be a single new SQL table, since a pub can have multiple authors. It would have to be at least 3 new tables. The overall complexity seems similar to what would be required to implement the originally discussed "automated" queue. Ahasuerus 23:29, 18 March 2017 (UTC)
No, a single table will be fine. If there are multiple authors, they will be stored just as they are caught. Think of it just as a parsing of the current lines you are posting. It won't be used for posting and cleaning the data but as a data source - instead of the line in the wiki I am using now. So - it does sound similar to my old idea, but it is not trying to be a submission (for now) - the editor will decide where the author names break and whether they break at all (so the "Hinks, Darius" I got in the last batch is easily recognizable by a human as Darius Hinks; automation would assume two names). Same with weird new publisher names and so on. It will be just the cleanup you are doing now for the lists - the rest will be up to the editor. And with a search supporting wildcards, if I am looking for "Nora Roberts", it will give me "Nora Roberts" and "Nora Roberts, some other names" and so on. And "Black Library" will find me all the books that have that string as part of the publisher. A second table for the editor name and date is fine, but a single table will do. Of course it can be overcomplicated - but I am really just thinking of a source table of almost uncleaned data. Annie 23:47, 18 March 2017 (UTC)
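One possible shape for such a raw single-table dump, matching the field list proposed above (the "editor" and "date" fields appear here as reserved_by/reserved_on for the 30-day claim). This is hypothetical DDL, not Fixer's actual schema:

    CREATE_FIXER_PUBLIC = """
    CREATE TABLE fixer_public (
        id          INT AUTO_INCREMENT PRIMARY KEY,
        isbn        VARCHAR(13),
        isbn_link   VARCHAR(255),
        title       VARCHAR(255),
        authors     VARCHAR(255),   -- raw, uncleaned string, e.g. 'Hinks, Darius'
        publisher   VARCHAR(255),
        price       VARCHAR(20),
        pages       VARCHAR(20),
        format      VARCHAR(20),
        reserved_by VARCHAR(64),    -- editor name, or 'reject' / 'queue2'
        reserved_on DATE            -- claims are released 30 days after this date
    )
    """

    def create_fixer_public_table(conn):
        """conn is an open MySQL connection, e.g. from MySQLdb.connect()."""
        cursor = conn.cursor()
        cursor.execute(CREATE_FIXER_PUBLIC)
        conn.commit()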
Oh, I see. So there would be no normalization, just a raw data dump like the Wiki sub-pages that we have been using. It would certainly be easier to implement, but would it offer significant advantages vis a vis the current Wiki-based system? I guess it would make it easier to search for authors and publishers. However, I could easily tweak the logic which exports data from Fixer's queues to Wiki pages to sort by publisher and then author, which may accomplish the same thing.
Yep. Think of it as a superbatch that does not force someone to search through and extract from a wiki page. Think of this scenario: I find the 10th entry in a pub series we have none of. If the superbatch were already keyed in, at that point I could go and pull everything from that publisher - or search by name (all of them have the series in the name). It is exactly like the wiki page - except that instead of you sending batches when someone asks for them, all those old e-books from the list we were looking at are already loaded in. In my example, the other 9 books are not in the 2017-01 list you gave me - they may be hiding in the batches between 2015 and 2016. Meanwhile, if you can sort by publisher and then by author, that will make things easier on the wiki page :) The problem with staying on a wiki page is that when a new entry is added, the order in the batch will be broken unless you regenerate it every time.
I know that you are preparing batches as they are asked for, but we know these e-books (for example) won't get miraculously added, and a lot of them are in bad shape and need a lot of human eyes on them - this would be a way to deal with them. And anything new showing up for old months/years gets thrown in there. I am not sure how much work you would need to do to prepare all of those batches, but... it needs to be done anyway. Let me know if that makes sense?
You know, we can always start on a wiki page, see how it goes, and if the plan of "go and pull the additional ones when I need them" works, then think of a DB solution. That will take zero development but will require a bit of work from you to get all the batches into the wiki. Annie 00:32, 19 March 2017 (UTC)
And one more note - just to explain why I am talking about a super batch. The easiest way may be to simply explain my process:
* Check if we have the author name in the DB. If not - is it possible that the book actually has a different author - check the title page and what the Amazon page says about the author. If yes - is it the same one or a new disambig?
* Do we have a different version of the book already? If so - this is a ClonePub/AddPub and it needs normalization. Or it is new NewPub and a variant if it is the UK guys dropping/adding a "the" again (a lot of those in the old batch) but we are ignoring that part (for the writeup that is - not for the DB).
* Publisher: Do we have it in the DB? If we have a few matching, which one is this one? Or is it a new one? What name needs to be used?
* If it is a NewPub, is it the earliest under this title - if I am adding the e-book, do we have an earlier tp or hc that the DB does not have, and so on (if there is one, there will be title re-dating, so I would rather add the first one first - especially if there are covers and content). And once I add one, the others are trivial, so I may as well clone after that and get them all in. This is where some of that whole ASIN -> ISBN thing came from.
* Is there a cover artist credited on the title page?
* Is this part of a series? If so, do we have the earlier entries? Are there newer ones already out?
* Is this part of a publisher series? If so, do we have the earlier entries? Are there newer ones already out?
* If it is a collection/magazine, figure out what is imported and what is added.
* If it is an omnibus and we are missing one of the novels/collections and so on, add that first
So if we have the data in a list already, at any point when I am looking for missing elements I can start with our Fixer lists - as much cleanup as they need, it is still faster and better than going solely from Amazon/publishers - verifying is easier than finding information.
Oh, and if you see something above that I am overthinking/overdoing, let me know. Annie 01:02, 19 March 2017 (UTC)
I'll check it again tomorrow morning, but it's pretty close to what I do on my end. At one point I documented some parts of the process here, but it's incomplete. We may want to use your write-up to beef it up. Ahasuerus 04:16, 19 March 2017 (UTC)
Another thing to consider is that maintaining Fixer's queue entails two separate tasks which cover different use cases. There is "historical cleanup" -- see the updated queue sizes for 2012-2016 on Fixer's User page -- and then there is ongoing maintenance. The "historical" task is easier to delegate since we can select ISBNs by author, publisher, etc. The "ongoing" task is harder to delegate because we only have so many books per publisher/author each month. I am not sure how we could optimize it... Ahasuerus 00:09, 19 March 2017 (UTC)
Anything that is 3 months old or newer receives a flag and is considered a priority? But that should not stop editors from working on older books as well... This whole plan is better suited for the historical cleanup and for the books that pop up later for older months. Maybe we should not collate them together, and instead leave the "very new" ones (2-3 months?) separate, handled in a different way/list? One problem at a time.
On the other hand, it is a title in the current lists that is likely to send you to the old lists to find earlier volumes and the like (if you care to do that - just adding is fine, I guess; I am just trying to fill gaps as I go anyway). Annie 00:32, 19 March 2017 (UTC)
Thanks, I think I have a better idea of what the proposed process would look like now.
I'll have to sleep on it since it would alter Fixer's logic in certain ways. There are processes in place which link Fixer's database with the main database, so I would need to adjust them accordingly. Let me think about it... Ahasuerus 04:19, 19 March 2017 (UTC)
It's not urgent and if you think it will be useful to work through some more older batches to see if that changes the perspective, I am fine with that. Annie 04:34, 19 March 2017 (UTC)
Unfortunately, I woke up sick this morning and thinking is difficult at the moment. However, one thing seems clear: whether we use the Wiki or whether we add software support for the proposed "superbatch" functionality, I will first need to clean up Fixer's database as per the discussion above. I'll see what I can do. Ahasuerus 18:32, 19 March 2017 (UTC)
Hope you feel better. All this can wait - take care of your health. Annie 00:27, 20 March 2017 (UTC)
Thanks! Ahasuerus 00:36, 20 March 2017 (UTC)

(unindent) I have been thinking some more (a dangerous thing...). The superbatch by definition will not contain any new-ish titles, but it may be used while adding new-ish titles to find earlier records and/or entries in a series. I have been looking at how I work on things and discovered that the part where it will be crucial is the e-book editions - Amazon is good enough to connect the dots but not to find a needed ISBN. It will simplify some other cases (e.g. tracking a series), but between Amazon and other online sources that information is there.

So as a first step (or one step), an ASIN -> ISBN map for all e-books Fixer caught may be enough to enable much smoother entering of records. That won't help make this more manageable if more people decide to help (as much as I love the personalized batches, I doubt that you would like to create 3 per day -- this is where the superbatch would come into play), but it will help with getting the correct records in play - it does not take that much to add the e-book when you add a hc or tp if you can find the data easily.

And all of that does not even begin to solve the big issue of helping you with entering the new books - it may start cutting into the backlog and help with later-found ones, but my understanding is that there is a huge number of new ones (current month and future) which are very different to work with - no Look Inside in a lot of cases, incorrect dates (see the 6 future books I am keeping in my queue - 2 of them have already changed dates). I am thinking about what can be done about that as well... no bright ideas yet. Annie 18:59, 20 March 2017 (UTC)

I have a few thoughts re: your questions, but I'll need some time to organize them. In the meantime I have created a couple of sub-pages for "public" projects under User:Fixer. Please feel free to grab whatever you like as per the process outlined on Fixer's User page. The first (experimental) list may not be very exciting, but I wanted to get the recently published J. H. Sweet ISBNs out of Queue 1 sooner rather than later. Once everything looks OK, I will post an announcement on the Community Portal.
I have also added "Queue 4" to Fixer's internal database. Fixer's logic has been modified to move all "public" ISBNs to Queue 4 as soon as they are made publicly available. The improved process should help avoid collisions with other Fixer activities. I plan to continue tweaking Fixer's "public" logic to make the data more useful. Ahasuerus 23:45, 20 March 2017 (UTC)
No hurry. And I may be vastly underestimating something. Considering some of the titles that I have been adding lately (the paranormal romances), these actually look pretty good - and looking at them, they will allow somewhat consistent additions. As soon as I am done with the few remaining in my lists (the two 2015 lists, basically), I will get these out and work through them. Do you want me to use the Fixer rejected page or my own going forward (especially for the deferred list that I hope to be done with soon-ish (TM))? For the new ones I will follow the standard new process :) Annie 23:51, 20 March 2017 (UTC)
It's probably best to keep all manually rejected ISBNs in one place. The fewer Wiki pages Fixer needs to monitor, the lower the chances that he will miss something :-)
Not a problem. :) Annie 00:08, 21 March 2017 (UTC)
I should probably create a few more sub-pages with "public" lists in case other editors want to jump in immediately. Any requests? Ahasuerus 00:00, 21 March 2017 (UTC)
If you can get me whatever he has from the publisher "Grinning Skull Press" (I've been planning to go and try to find the ISBNs after adding #10 in a publication series) - they do not seem to have more than 30 titles. Annie 00:08, 21 March 2017 (UTC)
That was a good guess. Fixer had 34 "Grinning Skull Press" ISBNs in his database -- sub-page created. Ahasuerus 00:34, 21 March 2017 (UTC)
Other than that - smaller lists as "starter kits" may be useful for anyone who wants to jump in? :) Annie 00:08, 21 March 2017 (UTC)
Yes, it makes sense. I'll have to see if I can compile a few self-contained lists that would be both interesting and manageable. Ahasuerus 00:34, 21 March 2017 (UTC)
On a somewhat different topic and just out of curiosity - what is the priority between the 4 queues that Fixer has listed? 1-p, 1-e, then n-p, n-e? Annie 00:08, 21 March 2017 (UTC)
I just stole your first list - it's exactly what I was explaining above about series - once one is in place, the rest will fall into place fast. It will probably take a few days (mainly waiting for moderator time...). Annie 00:19, 21 March 2017 (UTC)

(unindent) I seem to be missing at least 60 e-books from J. H. Sweet (the Kindle versions of the 60 "Fairy Chronicles" paperbacks are missing for sure, maybe some more). The ISBNs visible in the Look Inside are the ones of the collections (in an "excerpted from ..." fashion). I can collect the ASINs while going through them to see if that will pull up ISBNs, but if you can check whether some e-books are hiding in lower queues, that may speed things up. They may not have ISBNs - which is fine, and I will add them that way. But I'd like to check. Here is an example ASIN: B01BDOPO8E Annie 00:53, 21 March 2017 (UTC)

No ISBN on file, I am afraid :-(
I guess what this means is that we need to add support for third party identifiers soon-ish if we want Fixer's "public lists" to be more comprehensive. The functionality is on my short list for 2017 anyway. Luckily, it shouldn't be that hard to do. Ahasuerus 01:02, 21 March 2017 (UTC)
I kinda expected that considering what I was seeing on the Look inside... Should I add them as #ASIN (#B01BDOPO8E in this case) or just leave the ISBN field empty? Annie 01:04, 21 March 2017 (UTC)
ASINs are vendor-assigned, so they are not considered true catalog IDs. One could argue that they sort of are catalog IDs if the book was published by Amazon, but it would open a can of worms. For now they go into Notes. We even have a brand spanking new template for them :-) Ahasuerus 01:26, 21 March 2017 (UTC)
Which is why I am asking before starting on them - so I do not need to redo them. Empty ISBN it is - plus adding the ASIN in the notes. :) Annie 02:03, 21 March 2017 (UTC)
Some special notes on cases where Amazon recognizes the same ISBN for both the paper and e-book versions: it grabs the number of pages and the price from the paper version, the date is the e-book one, and the type is always set to Kindle. Example:
  • 9781936660018 The Eternity Stone (The Time Entity Trilogy 2) by J.H. Sweet, 2011-04-16, Kindle Edition, 130pp, $9.99
The Kindle page number is 132, the price is $3.99. Maybe it is stale data? Who knows - but there were quite a lot of these in Sweet's batch. It seems to collate the two versions because they respond to the same ISBN. Or something. Annie 22:57, 22 March 2017 (UTC)
Interesting. I have seen similar cases in the past, but they seemed to be random. I don't think I have come across an author with a significant number of cross-pollinated ISBNs before. I wonder if it may have had something to do with the fact that so many of her books were released at the same time? Ahasuerus 23:32, 22 March 2017 (UTC)
I think it has more to do with the fact that the same ISBN is assigned to two different formats - and that throws a wrench into the acquisition of data... Where the ISBN is not set for the e-book, all is good, but in the cases where the e-book is a digitized version of the paper one and the ISBN is assigned to both in Amazon, the above happened.
Although I would not discount the theory that somewhere in that massive February dump, some books just got messed up. But the explanation of what came from which version is always consistent for these messed-up ones. I untangled them all on entering, but it is an interesting thing to keep an eye out for in the future. Annie 23:38, 22 March 2017 (UTC)


Fixer: ASINs

Can you ask our favorite robot for the ISBNs for the ASINs here? I've added the title I am expecting - feel free to ignore it - it is for my comparison later. The lower list is the ones I already have - only the ones under New are the ones I am looking for. Thanks! Also - let me know if you prefer me to put these requests on the Fixer page instead of yours :) Annie 02:48, 23 March 2017 (UTC)

Done. This page is fine for now. I am sure we'll streamline the process as we go forward. Ahasuerus 03:03, 23 March 2017 (UTC)
Thanks! Works for me - I will be collecting them and just asking once in a while. :) Annie 03:17, 23 March 2017 (UTC)

Second Stage Lensmen

In the Notes section of Second Stage Lensmen it says it is the Fifth Printing. Shouldn't that be Sixth Printing? --AndyjMo 17:24, 24 March 2017 (UTC)

You are right, it says "Sixth printing" in my copy. It may have been a mistake on my part or it may have been changed in a subsequent edit, but either way I have corrected it now. Thanks for noticing! Ahasuerus 17:39, 24 March 2017 (UTC)

Ignore option missing on Non-Latin Authors.... clean-up report?

I tried to field this request, but the report doesn't give me an ignore option. I see it in other reports, just not this one. --MartyD 12:08, 25 March 2017 (UTC)

I have responded on the Moderator Noticeboard, but it leaves the larger question unresolved. If an author lives in country A, but his works are mostly published in country B, there can be a legitimate mismatch between his legal name and his working language. A quick scan of the data finds Павел Амнуэль, who was born in Azerbaijan, lives in Israel and publishes SF in Russian. Our current value of the "Legal name" field presumably reflects his Soviet-era legal name and needs to be corrected. In this case the mismatch is between two non-Latin languages, so it won't appear on the cleanup report, but sooner or later we'll probably come across a Latin/non-Latin mismatch. Ahasuerus 14:51, 25 March 2017 (UTC)

Tales of To-Morrow No. 2

Hi, for the sake of consistency I renamed Tales of To-Morrow No. 2 into Tales of To-Morrow, Issue 2, October 1950.--Dirk P Broer 13:10, 25 March 2017 (UTC)

Sounds good! Ahasuerus 14:24, 25 March 2017 (UTC)

Fixer: Necon E-books and May December Publications

Two more requests: can you get everything with the publisher set as "Necon E-books" (or Necon, or anything containing Necon) and "May December Publications"? Necon has a few publisher series I want to finish and this will be an easier way to find the ISBNs, and May December popped up a few times for the "Grinning Skull" authors, so I may as well work through them too. Thanks Annie 04:04, 25 March 2017 (UTC)

Sure, I will poke around. Ahasuerus 04:29, 25 March 2017 (UTC)
May December Publications done. Necon was a bust in that Fixer was aware of only one additional publication, now entered. A quick scan of Amazon's data suggests that they use ISBNs sparingly. Of course, Fixer has no way of submitting ISBN-less pubs (yet!) Ahasuerus 19:53, 25 March 2017 (UTC)
Thanks! No worries - I will add the Necon books based on Amazon and their site. Knowing that they have no ISBNs helps. Annie 19:56, 25 March 2017 (UTC)

Possible bug in award records if an author has been removed from the linked title record

Hi. I just came across some strange behaviour which might be a bug: the cover artist "Horst Gotta" has been removed from this cover art record, and the award record correctly shows one author only. However, if I edit the award record "Horst Gotta" is still shown as second author. Jens Hitspacebar 19:03, 28 March 2017 (UTC)

The way awards handle titles and authors is a bit odd due to the fact that we support both title-based and "other" awards.
For "other" awards, the title and the authors are entered manually when the award record is created. The entered data is then used for display purposes.
For title-based awards, the title and the authors are copied from the title record when the award record is created and are not editable. However, the display part of the ISFDB software always uses the title and the authors of the title record as it exists in the database. There is really no reason to capture this information for title-based awards, but that's how it was originally done. I guess I should update the "Award Editor" page to retrieve the title/authors information from the current title instead of the award record. Ahasuerus 19:22, 28 March 2017 (UTC)
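For illustration, here is a minimal sketch of the display rule described above; the field and helper names (award_title_id, award_title, award_authors, load_title) are invented for the example and are not the actual ISFDB schema:

  def award_display_credit(award, load_title):
      # "load_title" stands in for whatever fetches the current title record.
      # "Other" awards display the title/authors captured when the award was entered;
      # title-based awards always follow the live title record instead.
      if award.award_title_id:                        # title-based award
          title = load_title(award.award_title_id)    # re-read the current title record
          return title.title, title.authors
      return award.award_title, award.award_authors   # "other" award: use the entered data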
Thanks for the explanation. I had already guessed from a look at the award table's fields that this might be the cause but thought I'd better post here to make sure that there's no data corruption going on there. Jens Hitspacebar 20:00, 28 March 2017 (UTC)

Thanks for accepting these last changes

My bad - managed to get distracted and copied names of books instead of names of stories for those two. Which is why I am always reviewing what I sent updates for. Now I think I will just go and add the two books later today :) Annie 20:50, 28 March 2017 (UTC)

No problem, things happen! Ahasuerus 21:02, 28 March 2017 (UTC)

Bibliographic warnings and unpaginated publications

I've got an editor who submitted page counts of "0" in unpaginated (electronic) pubs to try to eliminate the "missing page count" bibliographic warnings. Any thoughts? I could just let the 0s through; I did with a couple that I didn't catch, and I think some other mods allowed more through. --MartyD 02:58, 31 March 2017 (UTC)

That's odd. Normally the software doesn't generate "missing page count" warnings for e-books. For example, this title page warns you about a missing price, but not about the lack of a page count. Do you recall which titles/pubs were affected? Ahasuerus 15:19, 31 March 2017 (UTC)
Chiming in as I had seen these: webzine records: here. Can they be suppressed in this case as well? Annie 16:40, 31 March 2017 (UTC)
Ah, I see. I guess we didn't have all those extra binding codes at the time we disabled "missing page count" warnings for e-books. I have added webzines, audio books and digital books to the exclusion list. Thanks for reporting the problem! Ahasuerus 17:27, 31 March 2017 (UTC)
It's better to be lucky than good. I didn't realize I was reporting a problem. :-) Thanks for the chime-in, Annie, and for the quick action. --MartyD 18:11, 31 March 2017 (UTC)
Here is an example. It's an emailed publication, given type "MAGAZINE" and binding "Other". Maybe have the binding be ebook? --MartyD 18:14, 31 March 2017 (UTC)
Well, Help says that "publications distributed via e-mail, on CD-ROM and other uncommon formats" should be entered as "other". Perhaps ask on the Rules and Standards page to see if there is support for changing it? Ahasuerus 18:20, 31 March 2017 (UTC)
Considering that this is not just available via mail but also as a webzine from their site, won't the easiest solution be to convert it to a webzine like Strange Horizons? Annie 21:20, 31 March 2017 (UTC)
I should probably leave the decision to those who are actually familiar with these new-fangled "webzine" thingamabobs :-) Ahasuerus 22:03, 31 March 2017 (UTC)
Me three. But I did make the suggestion. --MartyD 02:19, 1 April 2017 (UTC)
The newer records for Daily Science Fiction are always put in as webzines. Being as this one isn't verified, I don't see any reason not to change it to a webzine. Vasha 17:14, 1 April 2017 (UTC)

New bug?

Hello, just FYI: when using Firefox (24.4.0), I now find it impossible to use the "Add Cover", "Add Title", "Add review" and "Add Interview" buttons (no result after clicking) when editing a publication, presumably since the installation of r2017-150. The "intermediate" buttons ("Add Artist", "Add Author", "Add Reviewer", "Add Interviewee/er") work fine. Everything seems OK with IE. Hauck 14:45, 1 April 2017 (UTC)

It looks like the new EDITOR code doesn't play nice with older browser versions like Firefox 24, which was released in 2013. (The current version of Firefox is 52.) I have disabled it for now pending an investigation. If you force a reload of the browser window using Control-F5, everything will hopefully go back to normal. Never a dull moment :-) Ahasuerus 15:26, 1 April 2017 (UTC)
Alas, there's still the same problem. The old version of firefox that I sometimes use is the one on my professional (and completely locked) computer. Hauck 15:32, 1 April 2017 (UTC)
Investigating... Ahasuerus 15:38, 1 April 2017 (UTC)
The pop-up validation code has been rewritten using older, hopefully backward compatible, features. When entering/editing the next publication, please force a full page re-load (Control-F5 in most browsers.) I have no easy way to test the code using older browser versions, so it's possible that something is still off. If you encounter any issues, please let me know. Ahasuerus 16:56, 1 April 2017 (UTC)
It seems to work. Thanks. Hauck 17:21, 1 April 2017 (UTC)
Great! Ahasuerus 17:24, 1 April 2017 (UTC)

Import content bug

Hello, when trying to import content into this Fixer-submitted COLLECTION that I approved, I got the following message: "Error: When editing/cloning publications, the Content section must contain one reference title." and can't import the contents. I've also noticed this for CHAPBOOKs (also created without contents by Fixer), where the problem is avoidable (but needs one extra submission) by entering the shortfiction manually and merging afterwards; and also for OMNIBUSes. Hauck 07:03, 3 April 2017 (UTC)

Same with trying to import a cover artist record. --Vasha 10:44, 3 April 2017 (UTC)
Thanks, investigating... Ahasuerus 11:18, 3 April 2017 (UTC)
Could you please try again? I have installed a patch to correct the problem. Ahasuerus 13:19, 3 April 2017 (UTC)
OK now--Vasha 13:47, 3 April 2017 (UTC)
Excellent! Ahasuerus 18:28, 3 April 2017 (UTC)

Cloning Error

Cloning is giving the following error message: "Except when editing publications; the reference title shown should not be entered in the Content section. It will be added automatically at submission creation time." -- JLaTondre (talk) 01:06, 4 April 2017 (UTC)

Fixed. Or at least I hope the third time's the charm. Ahasuerus 02:04, 4 April 2017 (UTC)

Is Not Title Type Searches

It looks like you cannot do an "is not" title search with Title Type? I was trying to filter out a title type, but "is not exactly" (which is what I thought should have worked) and "does not contain" have no effect when used with a title type. -- JLaTondre (talk) 21:12, 11 April 2017 (UTC)

Sounds like a bug. Could you please post a sample URL? Ahasuerus 21:50, 11 April 2017 (UTC)
Author's name contains Asimov & Title Type is not exactly shortfiction[13]
Author's name contains Asimov & Title Type does not contain shortfiction[14]
Not sure if the second one should work or not, think the first should, but both return shortfiction. Thanks. -- JLaTondre (talk) 22:30, 11 April 2017 (UTC)
Thanks! The bug has been recreated and fixed on the development server. It will take a few days to deploy the fix on the production server since I am in the process of reorganizing verifications. Ahasuerus 23:20, 11 April 2017 (UTC)
Fixed! Ahasuerus 14:46, 16 April 2017 (UTC)
Awesome. Thank you! -- JLaTondre (talk) 15:02, 16 April 2017 (UTC)

Weird submission

Hello, submission #3399236 gives a strange result: " ICAP Error (icap_error) An error occurred while performing an ICAP operation: Fatal error while decoding request/response There could be a network problem, the ICAP service may be misconfigured, or the ICAP server may have reported an error. ". Hauck 10:45, 12 April 2017 (UTC)

ICAP is used by proxy servers, i.e. servers that reside between your browser and the Web server that you connect to, in this case the ISFDB server. They can do a variety of things, including checking for viruses and restricting access based on content. I don't know much about them, but it sounds like some proxy server between your browser and the ISFDB server was confused by this particular submission. I have approved and massaged the submission; hopefully we should be OK now. Ahasuerus 11:32, 12 April 2017 (UTC)
OK, thanks for having a look. Hauck 11:51, 12 April 2017 (UTC)

Comments

FYI: RecognizedDomains from common/library.py could do with a cleaning. Specifically, once such links have been expunged, we can remove the dead sites "sff.net" and "shelfari.com" (I may have put in author updates to remove the last of these but they could be elsewhere as I only looked with advanced search which does not cover everything like publisher and series website links, etc.). Uzume 07:14, 17 April 2017 (UTC)

Good point, SR 115 has been created. On the development server, there are 30 records whose notes refer to sff.net, 10 Web pages with references to Shelfari and 148 Web pages with raw (i.e. not via the Wayback Machine) references to SFF.net. Ahasuerus 15:12, 17 April 2017 (UTC)
FYI: I noticed there are author email addresses that also refer to sff.net (however I cannot search those via advanced search). Does it make sense to merge author email addresses into webpages using the "mailto" URI scheme (it should still be possible to enter and display them differently/separately, and merging solves the search issue)? Uzume 17:49, 17 April 2017 (UTC)
I think of e-mail addresses and Web pages as different entities, so I don't believe that it would be beneficial to put them in the same field. We could easily add "E-mail" to the list of searchable fields supported by the Advanced Author Search logic, though. Ahasuerus 18:09, 17 April 2017 (UTC)
I wasn't sure it sounded like a good idea myself. We should definitely get email addresses added to advanced search. Uzume 00:57, 19 April 2017 (UTC)

Could you perhaps briefly look at and comment on ISFDB:Community Portal#Variant Reviews? Thanks, Uzume 07:14, 17 April 2017 (UTC)

Response posted. Ahasuerus 15:12, 17 April 2017 (UTC)
Thanks, Uzume 17:49, 17 April 2017 (UTC)

(unindent) BTW, I noticed Schema:authors says we are using author_note over note_id. Would you care to explain the reason/history on that? Thanks, Uzume 18:54, 17 April 2017 (UTC)

The reason why all notes (and title synopses) were originally put in the "notes" table had to do with the origins of the ISFDB project. ISFDB 1.0 as created in 1995 was a hand-crafted C-based database. When the software was rewritten in 2004-2006 using Python and MySQL, some implementation details were carried over even though they were no longer needed. For example, the publisher merge code increments and keeps track of "MaxRecords" instead of using natively available Python functionality. Over the last 8 years I have rewritten much of the code, but some vestiges remain. Ahasuerus 19:24, 17 April 2017 (UTC)
(discussion of WSGI moved to Talk:Development). Ahasuerus 17:28, 20 April 2017 (UTC)
Similarly, back in the day it was very important not to add potentially lengthy fields to your core tables. They were relegated to separate tables to avoid performance problems, but the trade-off was increased code complexity. These days, "mediumtext" makes it a non-issue for most practical purposes, hence the use of "author_note" instead of "note_id" for authors. We have SR 110 to "Move all notes and synopsis fields to their respective records", but it will take a fair amount of work. Ahasuerus 19:24, 17 April 2017 (UTC)
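Roughly, the trade-off looks like this; the join and the notes table's column name are assumptions made for the sake of the example, not the real ISFDB queries:

  # assuming a MySQLdb-style cursor
  def author_note_old(cursor, author_id):
      # old layout: the note lives in a shared "notes" table, pointed to by authors.note_id
      cursor.execute("""SELECT n.note_note FROM authors a
                        JOIN notes n ON n.note_id = a.note_id
                        WHERE a.author_id = %s""", (author_id,))
      return cursor.fetchone()

  def author_note_new(cursor, author_id):
      # newer layout: the note is an inline MEDIUMTEXT column on the author row, so no join is needed
      cursor.execute("SELECT author_note FROM authors WHERE author_id = %s", (author_id,))
      return cursor.fetchone()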
That does sound somewhat painful. Uzume 00:22, 19 April 2017 (UTC)

I am still looking forward to a trans_series. Thanks, Uzume 18:54, 17 April 2017 (UTC)

I expect that we will first have to figure out what we want to do about series which have multiple names, including different names used in different countries. Ahasuerus 19:26, 17 April 2017 (UTC)
Do they really? Typically publications are in a single language (there are exceptions) and thus a series of publications (be it a series of works, a pub series, a magazine, etc.) is also normally in a single language. That does not mean we do not have transliterations and perhaps even translations of such names. I do not see trans_series as much different from our current trans_pub_series. Uzume 00:22, 19 April 2017 (UTC)
It's not just transliterations, it's the whole issue of multiple series names like Pentagram (original) / The Gatekeepers (US) / The Power of Five. While title records support VTs and author records support pseudonyms, there is no "variant series" mechanism for series. We just 'chain' alternate series names using slashes instead. These alternate names can include US/UK variations, translations, transliterations, publisher-specific changes, etc. It's a mess and we'll need to sort it out at some point. Ahasuerus 00:59, 19 April 2017 (UTC)
Good point. Series and pub series should have a variant title type of mechanism someday. Currently series do have the sub-series mechanism. It might be possible to leverage such pointers in much the same way we used variant titles for translations, etc. Of course we might be able to devise a better solution too. It obviously needs thought. Uzume 01:31, 19 April 2017 (UTC)

Broken Edit

Please see this edit. Assume it's from bad HTML in the notes field, but no buttons available to process it. -- JLaTondre (talk) 21:37, 17 April 2017 (UTC)

That's right, it was bad HTML in notes. I have rejected it using "hardreject". Ahasuerus 21:46, 17 April 2017 (UTC)
I notified the submitter. Thanks. -- JLaTondre (talk) 22:00, 17 April 2017 (UTC)

suggestion about length mismatch warning

When varianting a serial episode to its parent SHORTFICTION, the length mismatch warning comes up -- there's no point to that. Vasha 00:53, 18 April 2017 (UTC)

FR 1042 has been created. Thanks! Ahasuerus 01:14, 18 April 2017 (UTC)

Record to delete manually?

Hello, I can't delete this record. You may have more success than me. Hauck 07:15, 18 April 2017 (UTC)

That was one seriously unhappy title record. RIP! Ahasuerus 13:57, 18 April 2017 (UTC)
Thanks. Hauck 14:45, 18 April 2017 (UTC)

Problem with a new publication

Hello and sorry to bother you but can you have a look at this new pub where there seems to be a problem (it seems to me with the new PV system). Thanks. Hauck 17:40, 18 April 2017 (UTC)

No worries, that's why I am here! The problem was caused by a new verifier who is yet to edit our Wiki -- the software didn't handle this scenario correctly. The bug has been squashed and we are hopefully back to normal. Thanks for reporting the issue! Ahasuerus 17:55, 18 April 2017 (UTC)
I supposed so. Thanks for the fix. Hauck 18:07, 18 April 2017 (UTC)

Currently no ordering of the items in the "Primary Verifications" section

Hi. Currently the query which retrieves the list of entries for the new Primary Verifications section has no ORDER BY, i.e. the items are displayed in arbitrary order (see the differences between 398951 and 362357). I think it'd be good to have a defined ordering, e.g. "ver_time ASC", which would re-establish the ordering like it was before (where PV1 was usually the oldest). Jens Hitspacebar 18:26, 19 April 2017 (UTC)
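A minimal sketch of the suggested change; apart from the ver_time column, the table and column names are placeholders, not the actual ISFDB query:

  # assuming a MySQLdb-style cursor
  def primary_verifications(cursor, pub_id):
      cursor.execute("""SELECT user_id, ver_time
                        FROM primary_verifications
                        WHERE pub_id = %s
                        ORDER BY ver_time ASC""", (pub_id,))   # oldest verification first (old PV1 behavior)
      return cursor.fetchall()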

Done! Ahasuerus 18:47, 19 April 2017 (UTC)
Thanks a lot! Jens Hitspacebar 19:04, 19 April 2017 (UTC)

Broken link on EditTitle

Hello, Can you fix the link to the Notes help page in edittitle.cgi? It is missing the link - the question mark is just a picture :) Thanks! Annie 19:57, 24 April 2017 (UTC)

That's odd. When I pull up a title record in Edit Title, e.g. this one, the mouse-over help reads "A free text note describing this title". Do you recall which title you were editing when you encountered this problem? Ahasuerus 20:58, 24 April 2017 (UTC)
The mouse-over is there, but it does not have the attached link that leads to a help page (here, from where one can find the link for the new templates :) - I need to click on the one from Synopsis and change the end of the address to get to the Note one). The newpub.cgi has the link connected properly. The edittitle does not. Annie 21:50, 24 April 2017 (UTC)
Oh, I see. Fixed! Ahasuerus 22:38, 24 April 2017 (UTC)
Awesome. Thanks a lot! Annie 22:40, 24 April 2017 (UTC)
Sure thing! And welcome back! Ahasuerus 22:49, 24 April 2017 (UTC)
Thanks! I will be back to the Fixer project in a couple of days - catching up on all kinds of stuff - I just left internet for 2 weeks. Annie 15:03, 25 April 2017 (UTC)

find_dups.cgi and the new template

Hello,

If you use some (all?) of the new templates in a note, the find_dups.cgi script does not expand them. See this for an example -- search for {{Tr|. Is that on purpose or just an oversight? Thanks! Annie 23:21, 25 April 2017 (UTC)

An oversight, now fixed. Thanks for reporting it! Ahasuerus 23:39, 25 April 2017 (UTC)
Thanks! :) Annie 00:02, 26 April 2017 (UTC)

A wrinkle to wrong-publication-type prevention

I don't know if this situation is ever going to arise again, but for completeness I will mention it... You added checks to prevent the "length" field from being set if the title type is anything other than SHORTFICTION. However, I just merged two instances of a title -- one of them had been entered as POEM and one as SHORTFICTION with length "short story". I decided that POEM was correct and performed the merge, but accidentally (I immediately cancelled and corrected) chose to keep the length set to "short story". There was no rejection. So potentially the result could have been a POEM with length set. --Vasha 19:16, 5 May 2017 (UTC)

Good point, Bug 665 has been created. However, it's a fairly low priority because we have a nightly cleanup report which looks for titles with improperly entered "length" values. Ahasuerus 19:28, 5 May 2017 (UTC)

Deeplinking allowed

Hello, as per this submission we're authorized to deeplink to the specified website. Hauck 17:39, 6 May 2017 (UTC)

Added, thanks! Ahasuerus 23:26, 6 May 2017 (UTC)

Devil World

Cover art for the 1st edition of this is by Enric; his signature can be seen on the original art here. (Different Enric art than on the Bantam 1985 edition). Horzel 09:02, 10 May 2017 (UTC)

Updated, thanks! Ahasuerus 14:15, 10 May 2017 (UTC)

Matching of diacriticals with not...

Would you do me a favor and see this and make sure I didn't give him a bum steer? It was spurred by submissions where he attempted to change the author credit to use a diacritical, but it matched to the original. --MartyD 01:24, 11 May 2017 (UTC)

Sure, I'll take a look. Ahasuerus 01:26, 11 May 2017 (UTC)

Arthur C. 'Ego' Clarke

In this edit, I only changed the titles from "April 1941" to "April 1942". However, it also changed Arthur C. "Ego" Clarke to Arthur C. 'Ego' Clarke. I've tried editing the essay title record to change it back with no luck. I enter the double quotes and it auto changes them to single ones. We now have a Arthur C. "Ego" Clarke and a Arthur C. 'Ego' Clarke when we should only have the former. -- JLaTondre (talk) 00:27, 12 May 2017 (UTC)

Way back we discussed quotes in author names and decided that only single quotes should be allowed. For this reason the software automatically converts double quotes in author names to single quotes at submission creation time. Or at least that's the theory -- apparently at least one data entry form contains a bug which lets double quotes slip in. I'll see if I can track it down... Ahasuerus 00:36, 12 May 2017 (UTC)
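A tiny sketch of the normalization described above (not the actual ISFDB code):

  def normalize_author_name(name):
      # double quotes in author names are folded to single quotes at submission creation time
      return name.replace('"', "'")

  normalize_author_name('Arthur C. "Ego" Clarke')   # -> "Arthur C. 'Ego' Clarke"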
Hm, there are no submissions with Arthur C. "Ego" Clarke in the submission body. Curiouser and curiouser... Ahasuerus 01:24, 12 May 2017 (UTC)
It was my submission. I noticed it's now resting comfortably in my Errored Out Submissions, if that's of any help. Doug / Vornoff 23:16, 12 May 2017 (UTC)
Yes, indeed, I was just about to post on your Talk page! The double quotes around "Ego" were encoded in the database, which is what threw me off. I now have a reasonably good idea what happened, although I will need to run additional tests. Thanks for helping debug this problem! Ahasuerus 23:22, 12 May 2017 (UTC)
No problem. It was driving me crazy trying to fix it, to no avail. However, I did learn something about quotes at least. Doug / Vornoff 23:27, 12 May 2017 (UTC)

Language of a pseudonym

I have a question after this discussion. Boukje Balder wrote in Dutch for the first years of her career. Then she changed the credit for her work to Bo Balder, and wrote in English from then on. Is it possible to have the language of the pseudonym as Dutch and the canonical name as English, or does she pop up on a cleanup report then? The same thing happened with Raymundus Joannes De Kremer, who wrote in French as Jean Ray and in Dutch as John Flanders. At the moment the language for both is French. Thanks, --Willem 20:52, 15 May 2017 (UTC)

If there is a language mismatch between an author and one of his/her pseudonyms, it will appear on one of the cleanup reports. However, a moderator will be able to "ignore" the mismatch, so it shouldn't cause an issue. The "ignore" functionality was added a while back when we realized that some contributors to collective pseudonyms had different working languages.
Also, keep in mind that the working language of a pseudonym doesn't really control anything. It's merely displayed on the pseudonym page, so you can tweak it to your heart's content :-) Ahasuerus 21:17, 15 May 2017 (UTC)
Thanks. They will show up on the next cleanup report, and should be ignored. I'll check again this evening (for me that is) --Willem 09:55, 16 May 2017 (UTC)

remove container title

When removing titles from publications, why is there the option to remove container titles? Shouldn't that be impossible since (as I understand it) you now have things set up so there must be exactly one container title at all times? Vasha 14:02, 16 May 2017 (UTC)

True, but if there is a mismatch between the publication type and the title type, the software may not be able to correctly identify the container title.
We have added a significant number of checks and balances over the last few years, but things can still get out of whack due to out of order submission approval and such. Ideally, a publication's "reference title" would be handled behind the scenes without human interaction, but that would require a major revamp... Ahasuerus 15:09, 16 May 2017 (UTC)

Cleanup report: "Chapbooks without Contents Titles"

This cleanup report shows three items, including two graphic novels. The graphic novels do, in fact, have their contents included -- it's just that as graphic novels, those contents are interior art. Should the cleanup report be changed to recognize that "graphic format" works only need an interior art credit? Or is there another way to handle these cases? Chavey 02:57, 18 May 2017 (UTC)

Graphic novels should still have a fiction record (with the 'graphic format' flag set) for the text portion. If artist credit is separate from the writer's credit, it would then also have an interior art record. See this example. -- JLaTondre (talk) 11:12, 18 May 2017 (UTC)
Right, that's how it works. Ahasuerus 14:42, 18 May 2017 (UTC)
However, in the case of The Stand: Hardcases, I question if that one should even be in the database. Per Amazon's Look Inside, King is listed only as "Creative Director & Executive Director" with script by Roberto Aguirre-Sacasa, art by Mike Perkins (who is not listed in our pub record), and color art by Laura Martin. It doesn't look like King had an author role on this, nor is it using his text. As such, I don't believe it is eligible for inclusion. -- JLaTondre (talk) 11:12, 18 May 2017 (UTC)
Exactly my opinion. It's another of Susan's (IMHO) borderline additions. Hauck 11:49, 18 May 2017 (UTC)
I agree that the book seems ineligible based on the current rules. Ahasuerus 14:42, 18 May 2017 (UTC)
R.I.P. Hauck 15:08, 18 May 2017 (UTC)

Title Note Template Expansion

Minor issue, but when submitting a New Publication, templates are not being expanded in the TitleNote field during approval (they are expanded in the actual record once approved). See this submission where the same Tr template was used in both the TitleNote and pub Note. Only the pub Note one is expanded. -- JLaTondre (talk) 12:51, 20 May 2017 (UTC)

Bug 668 has been created, thanks! Ahasuerus 14:19, 20 May 2017 (UTC)
Fixed. Ahasuerus 21:06, 14 June 2017 (UTC)

Oddity in Cleanup report "Authors That Exist Only Due to Reviews"

In general, if a cleanup report has been cleared so that no more problems exist, the cleanup report says something like "No pseudonyms with canonical titles found". With the "Authors That Exist Only Due to Reviews" report, it just gives a blank screen, or more precisely a single table row of blue background and nothing in it. No biggie, but it feels weird. I could find no other cleanup reports that did this. Chavey 14:24, 23 May 2017 (UTC)

It was one of the original cleanup reports, so allowances have to be made :) I have created Bug 669 and will take care of it once I finish the External Identifiers patch. Almost there... Ahasuerus 15:15, 23 May 2017 (UTC)
Fixed. Ahasuerus 22:02, 26 May 2017 (UTC)

JPNO

In converting external identifiers, I ran across another - JPNO. There are between 100 and 200 of them[15]. -- JLaTondre (talk) 13:20, 27 May 2017 (UTC)

"JPNO" appears to be the same as JNB, i.e. "Japanese National Bibliography", which is already supported. I have added 80016068 as a JNB number to this pub and the two links take you to the same Web page. Perhaps we should change the identifier type name from "JNB" to "JNB/JPNO"? Ahasuerus 15:40, 27 May 2017 (UTC)
Sounds reasonable based on this. -- JLaTondre (talk) 20:45, 27 May 2017 (UTC)
Done! Ahasuerus 21:43, 27 May 2017 (UTC)
It might be good to mention somewhere that JNB/JPNO point to the National Diet Library (NDL). Is it okay for me to edit Template:PublicationFields:ExternalIDs? I could even format the entries so they aren't just in plain text. ···日本穣 · 投稿 · Talk to Nihonjoe 04:38, 31 May 2017 (UTC)
Please do! Ahasuerus 04:47, 31 May 2017 (UTC)
Okay, done. Just need links to all of them. ···日本穣 · 投稿 · Talk to Nihonjoe 18:56, 1 June 2017 (UTC)
Looks good, thanks! Ahasuerus 22:25, 1 June 2017 (UTC)
I added links for everything but B&N. We can always update them with more specific links if needed. ···日本穣 · 投稿 · Talk to Nihonjoe 23:59, 1 June 2017 (UTC)

Interesting warning message

over here. I am trying to remove this ISBN :) I do not think that it should be warning me that the old one is "13-digit ISBN for a pre-2005 publication":) Annie 17:40, 30 May 2017 (UTC)

Fixed. Thanks! Ahasuerus 21:30, 30 May 2017 (UTC)

Possible Fixer errors in publication types

I've so far found two recently-added collections or anthologies that were in the database as novels (this one and this one pending correction, to be precise). Does Fixer often make mistakes of that sort? --Vasha 15:54, 1 June 2017 (UTC)

Unfortunately, Fixer has no way of telling whether a publication is a COLLECTION/ANTHOLOGY. Amazon provides some information about pub types, but it's unreliable and inconsistent. After multiple attempts to get Fixer to guess correctly, I gave up. The only distinction that Fixer makes is between pubs with 80+ pages and pubs with <80 pages. The former are submitted as NOVELs while the latter are submitted as CHAPBOOKs. It's not guaranteed to work correctly, e.g. some Amazon omnibuses have a page count of 1 for some reason, but it works in 98%+ of all cases. Other than that, publication type validation and massaging is up to the approving moderator. I'll go ahead and update Help:Screen:Moderator to reflect Fixer's current logic. Ahasuerus 16:46, 1 June 2017 (UTC)
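For reference, the rule of thumb described above amounts to something like this (a sketch, not Fixer's actual code):

  def guess_pub_type(page_count):
      # 80+ known pages are submitted as NOVEL, everything else as CHAPBOOK; the approving
      # moderator then corrects COLLECTION/ANTHOLOGY/NONFICTION cases by hand
      if page_count and page_count >= 80:
          return 'NOVEL'
      return 'CHAPBOOK'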
Any chance you can change the 80 pages to 100? That will eliminate a lot of conversions needed later - and chances of anything under 100 pages being a novel are still slim (and it catches the 96-pagers). I'd even say 120 but won't push it. Annie 17:02, 1 June 2017 (UTC)
It would be easy to do, but we would presumably also need to change this cleanup report to look for NOVEL pubs with <100 pages. It would also re-open the old and never settled issue of juvenile novels. A lot of them, including many with 128+ pages, have <40K words and we were never able to reach consensus re: their type. We used to joke that we have two types of editors: "lengthists" and "bookists" :) Ahasuerus 17:23, 1 June 2017 (UTC)
I cannot see that report - not being a moderator. Ah, the juveniles. So let's reopen the discussion then? The 128+ pages do not bother me that much - but anything under 128 is not legitimately a novel unless it is some weird small-print edition. Annie 18:47, 1 June 2017 (UTC)
Well, as per Help:
  • novella - A work whose length is greater than 17,500 words and less than or equal to 40,000 words. (Roughly 50 to 100 pages in a book.)
Unfortunately, there are quite a few books/magazines with a non-standard number of words per page, which makes it a guessing game. Some public domain reprints manage to squeeze a bona fide novel into 70ish pages. (I don't envy their customers!) Other times a 220 page book turns out to be a novella because it's an extra-large print edition.
I try to handle almost all of Fixer's borderline cases on my own and I use Look Inside to approximate word counts. When in doubt, I add notes, so the results are usually in the ballpark. Ahasuerus 21:55, 1 June 2017 (UTC)
And here comes the worst example yet: Boomers for the Stars, 362 pages, est. 110 words per page, <40K words! Ahasuerus 21:36, 9 June 2017 (UTC)
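The arithmetic behind that example, as a quick sanity check against the 40,000-word threshold quoted from Help above:

  pages, words_per_page = 362, 110
  estimated_words = pages * words_per_page   # 39,820
  estimated_words > 40000                    # False -> under the novel threshold despite 362 pages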
I really hate self-publishing (when I am cataloging, that is) :). I am always wary when I see non-standard publishers publishing "novels" - way too many are novels only in someone's dream. This, though, IS excessive. Looking at the Look Inside - that's worse than the usual wide spacing in children's books... Annie 21:47, 9 June 2017 (UTC)
I'd say that juvenile novels under 40K can stay novels if they wish to (but not my decision). Still - bumping those 80 to a bit higher will help a lot for all those self-published and kinda self-published things that are novels only because this sells better than a novella. I'd think that 128 may actually be a good watershed - 128 and higher - novels, 127 and lower - chapbooks. It still will miss some - but not as many as it does now... Annie 18:47, 1 June 2017 (UTC)
Searching for type=NOVEL and title contains "stories" came up with some 25-30 books that seem to be incorrectly set to NOVEL. What does that suggest about how many other books haven't had their default publication type corrected? Maybe a couple hundred in total? --Vasha 19:01, 1 June 2017 (UTC)
Well, some of them may be collections of linked stories which can also be considered NOVELs. For example, Weird Stories from the Lonesome Cafe appears to be an account of continuing adventures broken up into chapters. Other borderline cases include Haunted: A Novel of Stories, which was nominated for the Stoker award in the collection category. I am sure there are some clearly miscategorized records, but it's hard to estimate how many we currently have. Ahasuerus 21:38, 1 June 2017 (UTC)

Иар Эльтеррус (Iar Elterrus)

It seems like the works of Иар Эльтеррус would be right up our alley. I would enter them, but my knowledge of Russian is pretty limited. Do you want to take a crack at it? I ran across him when someone was asking about a book they bought because of the epic cover (this one). They had no idea what the book was or who the author was, so I did some hunting around until I found it. ···日本穣 · 投稿 · Talk to Nihonjoe 06:54, 11 June 2017 (UTC)

I try to help with the Slavic titles/pubs that we have on file, but, unfortunately, I don't have the bandwidth to work on entering new ones. FantLab alone has over 200,000 publication records; a manual reconciliation with our data would take tens of thousands of man-hours. Some of their data, especially non-Russian data, will require additional massaging. For example, consider the following FantLab record. It claims that this Polish collection was first published in 2004 and that it was reprinted in 2009 and 2014. On the other hand, Fantasta claims that the first edition of this collection appeared in 1999. It also lists the 2004 and the 2009 reprints, but it doesn't have the 2014 reprint on file. OCLC has records for the 2004 and 2014 editions, but not for the 1999 or the 2009 editions. And so it goes :-)
In the long run, we will want to automate the process of reconciling our data with third parties, be it the Library of Congress, OCLC, SFBG, FantLab, Fantasta or the Unified Catalog of Martian Libraries. Fixer already has supporting modules that can handle the Library of Congress, OCLC and a number of other publicly available catalogs. However, they can't be used until we finish the migration of external identifiers to the new fields.
We also need to implement certain other software upgrades to support Slavic records. Polish and especially Russian publishers have been known to use multiple ISBNs per book, so we need to accommodate that as well as books co-published by multiple publishers. It would also help to be able to access FantLab's and other bibliographic web sites' underlying databases in order to facilitate the reconciliation process, a separate can of worms.
It's all doable, but it will take some time, especially considering the fact that Fixer finds up to 10,000 new ISBNs every month. I don't know how many productive years I have left in me, but I'll do what I can. Ahasuerus 17:03, 11 June 2017 (UTC)
Aha. Anyone else here who likes to enter Russian records? ···日本穣 · 投稿 · Talk to Nihonjoe 17:38, 11 June 2017 (UTC)
Paging User:Anniemod‎ and User:Linguist :-) Ahasuerus 17:50, 11 June 2017 (UTC)
I will add it to my list of things to do if someone does not get there before me but... he is a relatively minor author and we are missing a lot of the big authors - either completely or they are awfully incomplete. So yes - he is right up our alley but so are almost all records in FantLab. :) Annie 21:09, 11 June 2017 (UTC)

Wiki vs external identifiers

Hi, which of the two is a blocker for future development (or a bigger blocker, if both are)? I am mostly back and have been chipping away at some of those again when I have a minute or two, but if one of them is more urgent, I can kick that a bit more. Annie 19:10, 12 June 2017 (UTC)

Welcome mostly back! :-)
Let's see:
  • "Publications with British library IDs in Notes" doesn't really block anything
  • "Publications with ASINs in Notes":
    • Once the current list has been processed, I will update the report logic to support "ignoring" publications. I will then change the logic to search for all occurrences of "ASIN" in publication notes, which will add a few hundred pubs to the report.
    • Once everything has been cleaned up, I will create a new cleanup report to look for recent e-books without ISBNs *or* ASINs.
    • Once that has been taken care of, I will modify Fixer to submit e-books without ISBNs, which has been the main goal of this exercise all along.
  • The Wiki-related reports do not block anything major development-wise. Once the data has been processed, we will be able to get rid of the "lexical match" links on bibliographic pages, but that's a minor tweak. I guess the Series/Magazine/Fanzine-related reports are the low-hanging fruit in this case since they affect fewer than 170 pages/records in toto. Also, once all records on the publication-specific report have been processed, we will be one step closer to getting rid of "publication tags", but that's a low priority issue.
Ahasuerus 21:27, 12 June 2017 (UTC)
The remaining Series, Magazines and Fanzines are not as low-hanging as they seem. :) Low number, but a lot of work left (creating issues for the Fanzines/Magazines, for example - and I hate just creating empty issues, so I dig out content and... it gets long) and too much data that needs sorting for the series :) I need to go and finish the last few Fanzines one of these days though. Will let you know when I manage to kill one of those categories for good.
ASINs it is then :) Annie 21:33, 12 June 2017 (UTC)
Sounds like a plan! Ahasuerus 22:09, 12 June 2017 (UTC)
None of the remaining pubs on the ASIN report are e-books (all of them will end up being ignored - but I am still copying the different identifiers out). You may want to add the rest of the ones that mention ASIN at all so we can get these out of the way and open the door for starting to work on the ASIN-less and ISBN-less e-books. Although I think that even for ISBN-based e-books, we may want to add the ASINs - especially with the new Amazon habit of not showing ISBNs even when they are there... Annie 15:28, 20 June 2017 (EDT)
Yup, it's on my list of things to do today. I just need to wrap up the Last User Activity bug, which is taking longer than expected because I am not used to Python's date/time library. Ahasuerus 16:25, 20 June 2017 (EDT)
Sorry, did not want to bug you :) Was not sure if you knew the status of those 60 or so leftovers :) PS: I hate the Python datetime library with a passion - it is just a pretender - it thinks itself a library, but it is so rudimentary that it hurts. Annie 16:55, 20 June 2017 (EDT)
No worries! I know just enough about Python to keep the project going, but I think I got it now. On to the cleanup reports... Ahasuerus 17:54, 20 June 2017 (EDT)
Done. Ahasuerus 19:05, 20 June 2017 (EDT)

How is your Uzbek? :)

Can you look at this and see if you agree with my reasoning? Languages changing writing systems are so much fun. Not. PS: This is this story. Annie 19:53, 13 June 2017 (UTC)

I agree that there is no perfect solution. What you are proposing should work, but we may want to check with User:Linguist just to be on the safe side. Ahasuerus 20:07, 13 June 2017 (UTC)
I will call him in here :) It all is reversible if we decide to do something else.. Annie 20:25, 13 June 2017 (UTC)
I agree (men roziman / мен розиман) with the use of the Cyrillic here. It was the official spelling when these guys were active, and one of them (Ходжиакбар Шайхов) at least was born after 1940. I don't know about Яшим Абдуллаев, but we can assume he was roughly the same age. Were we to discover he was born before 1940, as Annie says, it won't be much trouble to update the record; he doesn't seem to have been a prolific author (spec-fic-wise, anyway). Linguist 08:50, 14 June 2017 (UTC).
Sounds like a plan. We may want to start putting together "language-specific" pages with instructions for the funny languages - just in case we manage to get someone else who can add Russian titles (and Uzbek ones). Uzbek is not the only language that has had an interesting history, where the year of writing and/or the writer's birth date will define what is being used... Annie 17:13, 14 June 2017 (UTC)

Edit Python Pub Error

Editing pubs (at least the notes) is giving me the following Python error:
<type 'exceptions.NameError'>: global name 'SubMap' is not defined
Thanks. -- JLaTondre (talk) 00:37, 14 June 2017 (UTC)

However, it is accepting the edit. It's just not going to the moderator screen. Get an error screen instead. -- JLaTondre (talk) 00:40, 14 June 2017 (UTC)
And it seems to be happening with every edit, not just an edit pub. -- JLaTondre (talk) 00:48, 14 June 2017 (UTC)
Same here. Just hitting [submit] triggers the error. --~ Bill, Bluesman 00:50, 14 June 2017 (UTC)
And at the moment, there are NO new submissions in the queue which may mean non-mods aren't getting anything through. --~ Bill, Bluesman 00:52, 14 June 2017 (UTC)
No other functions seem compromised ... yet! --~ Bill, Bluesman 00:58, 14 June 2017 (UTC)
Investigating... Ahasuerus 01:11, 14 June 2017 (UTC)
Fixed. Sorry, apparently I need new glasses :-( Ahasuerus 01:21, 14 June 2017 (UTC)
Quite fine, it was just so peacefully quiet there for a half hour ........ ;-)))) --~ Bill, Bluesman 01:22, 14 June 2017 (UTC)
I should have known better than to go for a walk after installing (what was supposed to be) a trivial patch! ;-) Ahasuerus 01:27, 14 June 2017 (UTC)

Cloning External Identifiers

When cloning, could external identifiers be made an option (like coverart, etc.), but defaulted to off? Currently, they are copied over, but many times that is not the correct thing as they are specific to an edition. Thanks. -- JLaTondre (talk) 21:41, 21 June 2017 (EDT)

An excellent idea! Will do! Ahasuerus 21:49, 21 June 2017 (EDT)
Done. Ahasuerus 22:46, 22 June 2017 (EDT)
Thanks! -- JLaTondre (talk) 17:32, 23 June 2017 (EDT)

Fixer question

Quick question - when dealing with Fixer entries, if it is a collection/anthology/magazine, and I cannot find the content online (and it is too early for LookInside), do we add it as an empty stub only? It probably is a stupid question and I am overthinking it but thought I should ask. Annie 13:55, 28 June 2017 (EDT)

That's right, we just create a stub. Eventually it (hopefully) gets populated by human editors. Fixer tells me that he appreciates all the hard work that humans do to make his data even better! Ahasuerus 15:06, 28 June 2017 (EDT)
OK - I will stop overthinking it - if it cannot be finished, it goes down as stub. Thanks! Annie 15:10, 28 June 2017 (EDT)
Don't forget to add it to the Anthology and Collection Tracker in that case! I recheck the ones with no or partial contents and add more information when I get a chance. I don't have time to add contents to all of them, but at least the tracker tells me how much work needs to be done. --Vasha 17:31, 28 June 2017 (EDT)

(unindent) Another one - I can see a lot of Fixer entries in the queue from your name as opposed to from the name of Fixer. Do you reserve them for yourself this way or do you accept help from other moderators with them and you just post from both names (for some reason)? Don't want to step on people's toes by mistake... :) Annie 21:58, 7 July 2017 (EDT)

A submission created by one moderator can't be approved/rejected by another moderator. Or at least that's the theory -- if you find a way to do it, please let me know :-)
As to how I make the decision which account to use when creating a submission, it's really an art more than a science. There are many variables and trade-offs involved. One of the deciding factors is that I have accumulated in-depth knowledge of many publishers and their quirks over the last 9 years, so I know which ones are likely to cause problems. Light novels, translations and a number of other areas have been known to cause issues as well.
On the other hand, one could argue that by shielding moderators from more complex cases I prevent them from acquiring the kind of experience that could enable them to take over more of the workload going forward. On the gripping hand, my attempts at knowledge transfer have had mixed success. A bit of a conundrum, really... Ahasuerus 22:15, 7 July 2017 (EDT)
Aha. I did not even try to work on another moderator's submission, not even sure if I read somewhere that I cannot - I just assumed this is not what you do and that is that. Just saw those today and thought I would ask if I can help. Guess you are on your own then with them. Sorry for the stupid question - newbie and all that. :)
As for the shielding... how about creating a second robot account to be used for those problematic ones? Moderators will know which are regular Fixer and which ones are the ones you usually handle, so they need to be extra careful around them. Maybe that will allow you to start offloading more of these? Because no one will be able to step in if they never had to deal with a messy publisher and even messier Amazon data (I can't even remember how many corrections I have submitted to them through the years - they are either good or horrible in their data - almost with no middle ground). As much as I appreciate that you can handle it all, as we had discussed before, the number of published books keeps increasing and someone needs to help - or things just get backed up (and I need to go back to the public Fixer project so we can try to kick that one off as well). Annie 22:31, 7 July 2017 (EDT)

excess space found

There's an unnecessary space after the interviewer name in interview credits: "(interviewed by AUTHOR )" --Vasha 18:36, 30 June 2017 (EDT)

Thanks! The same thing happens with "(reviewed by AUTHOR )". Surprisingly, it's not a trivial fix due to the underlying design, but I'll see what I can do. Ahasuerus 21:54, 30 June 2017 (EDT)
I have created a bug report so that I don't forget about it. Ahasuerus 19:22, 1 July 2017 (EDT)

Cleanup reports stuck?

By now all of the nightly ones are usually regenerated, but tonight a lot of them are missing - including ones I am sure should be there. Did something get stuck somewhere? Thanks. Annie 01:51, 7 July 2017 (EDT)

They look OK this morning. Perhaps a temporary overload around 1am? Ahasuerus 09:28, 7 July 2017 (EDT)
Still a problem from where I am sitting (and I just checked from two separate computers on two different internet connections, just in case something weird happened in my home network). The whole set of Wiki cleanup and Pub/Title transliteration reports did not regenerate at all (missing from the normal list; showing 0 in the full list). Plus at least some of the Author language reports are missing, and I am sure a few more here and there. Are you seeing something else? For example, Other Titles without Transliterated Titles should have a few Croatian titles I did not get to yesterday, the Arabic and a few more exotic scripts. And I am sure we did not miraculously clean those or the wiki in the 10 minutes between the time I looked before the regeneration and the regeneration :) Annie 10:11, 7 July 2017 (EDT)
Oh, I see. It turns out that "Mismatched OCLC URLs in Publication Notes", which had a problem on the display side, had the same problem on the generation side. Who knew?? Give me a few minutes to fix it... Ahasuerus 10:20, 7 July 2017 (EDT)
OK, the bug has been fixed with extreme prejudice. I could force report regeneration, but I am not sure it's worth the performance hit. Ahasuerus 11:20, 7 July 2017 (EDT)
Great :) No, no need to regenerate now I think, tonight will be fine. There is enough work on the already generated ones if someone has the time and inclination to work on something. Plus, with very small exceptions, the transliteration and wiki ones do not get many helpers anyway. Annie 11:56, 7 July 2017 (EDT)

Strikethrough in titles

If someone doesn't do anything with it before you get a chance, would you take a look at this submission? Mark-up in a title doesn't seem like such a great idea to me, but I'm not necessarily keeping up with your zillions of enhancements/improvements.... Thanks. --MartyD 14:31, 9 July 2017 (EDT)

The big security patch that I have been working on for the last few months will disallow HTML in all fields except Notes. I would advise against using it in regular fields since it will be zapped Real Soon Now (tm). Ahasuerus 14:37, 9 July 2017 (EDT)
And now it landed here... Annie 15:32, 10 July 2017 (EDT)
Fixed! Ahasuerus 16:23, 10 July 2017 (EDT)
Do we have a complete list somewhere on what we allow as HTML now? Annie 16:29, 10 July 2017 (EDT)
I thought we did, but apparently not... <click-click-click> And now we do! Ahasuerus 16:43, 10 July 2017 (EDT)
Awesome. Thanks! Annie 16:54, 10 July 2017 (EDT)
That page states that italics should use the <i> tag. In the past I always used <em> instead because I was accustomed to that from my blogging platform. Do you have plans to systematically replace all <em>? —The preceding unsigned comment was added by Vasha77 (talkcontribs) .
Nope! My primary concern is database integrity and security. Something like "table style=..." is a huge security hole, which we need to plug sooner rather than later (easier said than done, but I am working on it.) Stylistic issues like "i" vs. "em" pale in comparison :-) Ahasuerus 18:03, 10 July 2017 (EDT)
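For what it's worth, a minimal sketch of the whitelist approach being discussed; the tag list and the regex are illustrative only, not the planned ISFDB implementation:

  import re

  ALLOWED_TAGS = ('i', 'b', 'em', 'strong', 'br')   # example whitelist

  def sanitize(text):
      # keep bare whitelisted tags; escape everything else, so constructs
      # like <table style=...> cannot reach the page unescaped
      def repl(match):
          inner = match.group(1).strip()
          if inner.strip('/').lower() in ALLOWED_TAGS:
              return match.group(0)
          return '&lt;' + inner + '&gt;'
      return re.sub(r'<([^<>]+)>', repl, text)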

Cloning back in time submissions

When someone tries to clone with the new publication being older than the title they are cloning into, there is a very helpful warning. If I go and change the title date and then refresh the moderator approval screen, the warning is gone. So far, great. But if there are any contained titles (contents, covers, you name it), even if I fix the dates and refresh, the warning is still there. Is there any way for that to be changed/fixed? Same applies to imports by the way - the reference title calculates its time issues in real time; the contents stay as they were when the submission was created. Annie 22:12, 10 July 2017 (EDT)

Unfortunately, the underlying problem is somewhat messy. ClonePub and EditPub submissions take a snapshot of the dates, titles and authors of Content records. However, the submission review process only uses the captured data for certain display purposes while the filing process uses fresh data from the database. I need to review the display logic carefully and get rid of the parts that use the captured data. It's certainly doable, but it's not trivial. I think it would be best to create a bug report to ensure that things wouldn't get lost in the shuffle. Ahasuerus 00:09, 11 July 2017 (EDT)
For some reason I expected that there is something like that happening. Oh well. Will create the report in the morning. Thanks. Annie 00:12, 11 July 2017 (EDT)

Weird display with awards

Look at the awards here. It looks like the same novel won the same award twice (it was the usual case of the French splitting the novel, so the award is assigned to the two separate volumes there; the data is correct, but if you look at this page, nothing will tell you that, and it does look weird). Is that by design? Annie 04:50, 11 July 2017 (EDT)

The ability to associate awards with VTs/translations is fully supported, but I don't think we have considered how it would affect books which have been translated as 2+ separate volumes. Perhaps we should display the VT's title in the "Awards" table -- if and only if it differs from the canonical title. Ahasuerus 09:07, 11 July 2017 (EDT)
Or, if it's a translation that's been split into two or more volumes in translation, have a note field that explains it was given to a specific part (just list the part). ···日本穣 · 投稿 · Talk to Nihonjoe 10:44, 11 July 2017 (EDT)
In this case it was given to both volumes together. And because of the way we handle the case, that meant 2 variants having the same award - thus the weird display. If it was just 1 part that got the award, we would not have an issue. :) Annie 11:10, 11 July 2017 (EDT)
That may be the easiest way - show the title if different. Maybe use the "as" format? Annie 11:10, 11 July 2017 (EDT)
Done. Ahasuerus 18:24, 11 July 2017 (EDT)
Looks good, thanks! Annie 19:04, 11 July 2017 (EDT)

New Pub Error

I accepted one of Fixer's submissions & it resulted in a Python error. It was a new pub submission and the book was this one. The error received was:
  <type 'exceptions.IndexError'>  Python 2.5: /usr/bin/python  Wed Jul 12 18:34:55 2017
  A problem occurred in a Python script. Here is the sequence of function calls leading up to the error, in the order they occurred.
  /var/www/cgi-bin/mod/pa_new.cgi in ()
    694   print "<hr>"
    695
    696   Record = DoSubmission(db, submission)
    697
    698   print "<hr>"
  Record undefined, DoSubmission = <function DoSubmission at 0x9004d14>, db = <_mysql.connection open to 'localhost' at 8e0eb3c>, submission = 3512057
  /var/www/cgi-bin/mod/pa_new.cgi in DoSubmission(db=<_mysql.connection open to 'localhost' at 8e0eb3c>, submission=3512057)
    487   for author in authors:
    488       data = XMLunescape(author.firstChild.data.encode('iso-8859-1'))
    489       addPubAuthor(data, Record)
    490
    491   ##########################################################
  global addPubAuthor = <function addPubAuthor at 0x9004bfc>, data = 'Jr.', Record = 625512L
  /var/www/cgi-bin/mod/pa_new.cgi in addPubAuthor(author='Jr.', pub_id=625512L)
    164       author_id = record[0][0]
    165   else:
    166       author_id = insertAuthorCanonical(author)
    167
    168   ##############################################
  author_id undefined, global insertAuthorCanonical = <function insertAuthorCanonical at 0x90043ac>, author = 'Jr.'
  /var/www/cgi-bin/mod/common.py in insertAuthorCanonical(author='Jr.')
    405   # If the last segment is a suffix, skip it and get the previous segment
    406   if lastname in ('Jr.', 'Sr.', 'M.D.', 'Ph.D.', 'II', 'III', 'IV', 'D.D.', 'B.Sc.', 'B.A.', 'M.A.'):
    407       lastname = fields[-2]
    408   # Strip trailing comma
    409   if lastname[-1] == ',':
  lastname = 'Jr.', fields = ['Jr.']
  <type 'exceptions.IndexError'>: list index out of range
I put nowiki tags around the traceback above so it wouldn't break the page formatting. I don't see the approved submission in the Recent Edits page, so whatever failed caused it to not show up there. I'm leaving the pub as is in case there is something that helps you track down the problem, but once you have looked at it, I'll fix the pub record to comply with our standards. -- JLaTondre (talk) 18:42, 12 July 2017 (EDT)

Thanks, I'll take a look. Ahasuerus 18:44, 12 July 2017 (EDT)
Looking at it more, it is not showing up on China Miéville's page. And looking at Tony Venezia's page, it shows up as a stray publication. Not finding it with title search either. It seems the title record did not get created. -- JLaTondre (talk) 18:45, 12 July 2017 (EDT)
Ah, I was wondering where this one disappeared to. I looked at it, decided I did not have the time for it and closed it, and then a few minutes later it was nowhere to be seen. I do remember seeing an author called just "Jr." though as part of the list of authors (and I remember because I actually went to Amazon to see what that was all about - Amazon has it this way). :) Annie 18:49, 12 July 2017 (EDT)
Not sure if it has anything to do with the issue, but it was entered as a NOVEL instead of NONFICTION. ···日本穣 · 投稿 · Talk to Nihonjoe 19:50, 12 July 2017 (EDT)
Unfortunately, Fixer has no way of telling that a book is NONFICTION or even COLLECTION (at least not reliably). His submissions always start out as NOVELs. Ahasuerus 20:17, 12 July 2017 (EDT)
I tried fixing that, but I kept getting an error stating there needed to be a content title and type that matched the publication title and type. So, I submitted a brand new clean version that had all the contents added, too. ···日本穣 · 投稿 · Talk to Nihonjoe 19:50, 12 July 2017 (EDT)
If you look at the broken one, it is a Pub record with no Title record. This is why you could not just change it. So you cannot change it because it is broken -- but it broke because of something weird with the authors, it seems. Had it been a complete record, you would have been able to change the type :) Annie 19:53, 12 July 2017 (EDT)
I wonder if it has anything to do with having too many authors? I've not seen too many publications with that many authors. Also (probably not related), the author template doesn't work with China Miéville's name as it changes the "é" to ".C3.A9", which doesn't match what's in the database. ···日本穣 · 投稿 · Talk to Nihonjoe 19:59, 12 July 2017 (EDT)
We have pubs with more than that. See my note about the lonely "Jr." and see the python error itself - I suspect that may be the culprit. Ahasuerus will let everyone know when he figures it out :)
For the template - do you want to start a new section here so Ahasuerus can see it and look at it? Annie 20:05, 12 July 2017 (EDT)
Done. ···日本穣 · 投稿 · Talk to Nihonjoe 20:15, 12 July 2017 (EDT)

(unindent) OK, the bug has been identified and fixed. "Jr.", which is on our list of "recognized suffixes", was the culprit. I have deleted the partial publication record and it looks like everything is back to normal. It should be safe to approve Nihonjoe's submission. Ahasuerus 20:14, 12 July 2017 (EDT)

Approved. And some surgery done on it (Publisher, Publisher series and a few capitalizations, not ones in book names). :) I will go tell Fixer about that one (because he submitted with the wrong name as well). Annie 20:21, 12 July 2017 (EDT)
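
For reference, here is a minimal sketch of the kind of guard that prevents this particular crash. It is modeled on the lines of common.py shown in the traceback, but the function name and surrounding details are assumptions rather than the actual fix:
SUFFIXES = ('Jr.', 'Sr.', 'M.D.', 'Ph.D.', 'II', 'III', 'IV', 'D.D.', 'B.Sc.', 'B.A.', 'M.A.')

def derive_lastname(author):
    # Split the canonical name into whitespace-delimited segments
    fields = author.split()
    if not fields:
        return ''
    lastname = fields[-1]
    # If the last segment is a suffix, skip it and use the previous segment,
    # but only when a previous segment exists -- a lone "Jr." has none,
    # which is what triggered the IndexError above
    if lastname in SUFFIXES and len(fields) > 1:
        lastname = fields[-2]
    # Strip a trailing comma
    if lastname.endswith(','):
        lastname = lastname[:-1]
    return lastname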

Accented characters and the author template

The author template doesn't work with China Miéville's name as it changes the "é" to ".C3.A9", which doesn't match what's in the database. Created a new section per Annie, above. ···日本穣 · 投稿 · Talk to Nihonjoe 20:15, 12 July 2017 (EDT)

Unfortunately, Wiki templates do not play nice with the core ISFDB software when there are non-ASCII characters present. It was one of the reasons why we started the Wiki-to-database migration last year. Ahasuerus 20:21, 12 July 2017 (EDT)
Most wiki templates have an alternate syntax to handle those cases. It will be described on the template page. Template:A supports the author's record number as the third argument. So {{A|China Miéville|2180}} gives China Miéville which works. I forgot to do that in the prior section. -- JLaTondre (talk) 20:30, 12 July 2017 (EDT)
Oh yes, I forgot about that! Ahasuerus 20:37, 12 July 2017 (EDT)
Okay, thanks. I fixed the one in the previous section. ···日本穣 · 投稿 · Talk to Nihonjoe 20:46, 12 July 2017 (EDT)

Fixer and e-books

Hi,

Is there any chance that Fixer can check for an ASIN match if it cannot find the ISBN when the format is ebook? I had 2 cases with the Fixer books today where Fixer submitted an ISBN but the book was already there, complete with ASIN but lacking the ISBN (human entered). We probably need these recorded somewhere (the existing pubs need the ISBN added to them). If it cannot be done easily enough, that's fine, but I thought I should ask. We are bound to get a lot more like that going forward... Annie 14:40, 18 July 2017 (EDT)

Good point. Now that the latest Amazon emergency is over and I am in the "catching up with Fixer" mode, I have more time to think about this (and related) issues.
It would be possible to bribe Fixer to create "PubUpdate" submissions to add the missing ISBNs to existing publication records. First, however, I'd like to generate a list of matches to see how many records we will be dealing with and how consistent they are. I'll see what I can do over the next day or two when I am not busy working on Fixer's other tasks. Ahasuerus 15:17, 18 July 2017 (EDT)
Agree that we should do a list before bribing Fixer :) It is not urgent - it was just a thought after deleting a few new pubs upon realizing that we already have them :) And I am chipping away at the ASIN-less, ISBN-less ebooks slowly, so more ASINs are added daily as well (not that it will make too much of a difference for the numbers here, but still...)
I had been thinking about the opposite problem - e-books with an ISBN but no ASIN in our system. Can we do something to grab those ASINs? Or at least find out how many of them we have? Because once we get through the ASIN-less and ISBN-less e-books and deal with them, we will start hitting the case where the ISBN was there once upon a time but now Amazon is playing dumb (or it was never there for Amazon but was in the Look Inside so was added) but the ASIN was not - opening the door for the duplicates again. Don't you just love e-books some days? Annie 16:03, 18 July 2017 (EDT)
True enough. Let's see... As of this Saturday morning we had 7,199 pubs with an ISBN-13 and no ASIN. We also had 608 pubs with an ISBN-10 and no ASIN. Spot checking suggests that about half the time the data had come from Amazon. We also had data from Smashwords, publishers' web sites, OCLC, DNB, etc. It sounds like a cleanup report is in order, although I expect that the "ignore" option will be used liberally.
I'll check the other type of mismatches next. Ahasuerus 17:28, 18 July 2017 (EDT)
There are 578 publications which:
  • have an ASIN external ID
  • do not have an ISBN in the publication record
  • have a matching ISBN in Fixer's database
We can't create a regular cleanup report for them because Fixer's database is not available on the main server. However, I could generate a list of matches and post it on the Wiki side. We could then examine each match, update the data and delete any duplicate records we may have. 578 is not too bad in the grand scheme of things. Ahasuerus 18:51, 18 July 2017 (EDT)
Get me the list on the wiki side and I will reconcile them... And we may need to do that a few more times for new ISBNs that Fixer finds, or make sure that Fixer does the check before it posts? Meanwhile, I caught 4 more in the current batch (3 ended up with deletion, one I actually saw early enough to reject Fixer's record). Annie 18:57, 18 July 2017 (EDT)
Will do -- thanks for volunteering! I'll post an update on the Moderator Noticeboard once the list has been made available. Ahasuerus 19:07, 18 July 2017 (EDT)
It is annoying to work on the Fixer submissions today - 4 more were duplicates from the last 10 or so I dealt with. So the faster we can reconcile, the easier for everyone :) And less annoying. Annie 19:12, 18 July 2017 (EDT)
Yeah, sounds like fun again. Any chance we can get Fixer (or a brother of his) to grab ASINs for ISBNs? There will be a lot that won't get ASINs (although a lot of the DNB ones might - they may have come from DNB but they are most likely in amazon.de as well). Annie 18:06, 18 July 2017 (EDT)
I think it should be doable. Fixer's database is slightly behind the main database, but the way the filing software works, it shouldn't create a duplicate external ID even if one is submitted. I'll see what I can do. Ahasuerus 19:09, 18 July 2017 (EDT)

(unindent) How about stopping Fixer from submitting e-books temporarily until we reconcile the DBs? It is not even funny - take a look at my current streak of rejections... Annie 19:42, 18 July 2017 (EDT)

Sounds like a plan! Ahasuerus 19:48, 18 July 2017 (EDT)
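
For illustration, the matching step described above boils down to something like the sketch below. The data structures are assumptions made for the example; the real reconciliation runs against Fixer's internal database and the live ISFDB tables:
def find_isbn_updates(fixer_isbn_by_asin, isfdb_pubs):
    # fixer_isbn_by_asin: ASIN -> ISBN pairs known to Fixer (assumed structure)
    # isfdb_pubs: dicts with 'pub_id', 'asin' and 'isbn' keys (assumed structure)
    updates = []
    for pub in isfdb_pubs:
        if pub['isbn']:
            continue  # the publication record already has an ISBN
        isbn = fixer_isbn_by_asin.get(pub['asin'])
        if isbn:
            # ASIN external ID present, no ISBN on file, matching ISBN in
            # Fixer's database -- a candidate for a PubUpdate or a manual fix
            updates.append((pub['pub_id'], isbn))
    return updates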

You are wanted on CP

Awards display oddity over here. Thanks! Annie 16:51, 20 July 2017 (EDT)

Noon: 22nd Century

Entered the contents for [this] but since my Russian is limited to an occasional hankering for vodka, I have no hope of matching/varianting the titles [though it doesn't appear we have much of the short fiction entered in either language]. Could you have a quick peek? Thanks! --~ Bill, Bluesman 22:55, 29 July 2017 (EDT)

Varianted and cleaned up. Ahasuerus 23:53, 29 July 2017 (EDT)
Looks like it should! Thanks. --~ Bill, Bluesman 11:27, 30 July 2017 (EDT)

The OCLC Control Number (OCN) has reached 1 billion!

Received this today; not sure if it will impact what we do here:

The OCLC Control Number (OCN) has reached 1 billion!

This sequentially assigned number associated with a WorldCat record is assigned when a record is created or loaded from an external source. The one billionth assigned OCN was for a digitized image that came from Chiba University Library in Chiba, Japan: http://www.worldcat.org/oclc/1000000000

What action should you take?

As part of our ongoing communications since 2013, if you use WorldCat records you need to check your systems to ensure you are able to successfully handle the longer OCN. To help you with this change we have provided sample files and instructions for testing: https://www.oclc.org/support/services/batchload/controlnumber/number-expansion.en.html

Albinoflea 18:57, 8 August 2017 (EDT)

Thanks for the heads-up! Our maximum supported length of external identifiers (like OCNs) is 65,535. OCLC will have to add a lot more records before we need to start worrying :-) Ahasuerus 19:24, 8 August 2017 (EDT)

Image permissions

I know it's in our 'non-help' pages somewhere ... but from [here] I can't find the actual permission letter for access to other sites. In the queue is a submission from someone/site called pulpscans which doesn't even trigger our 'we-don't have permission/etc' error message. Probably because the link seems dead [I can't even get the site to open from another window]. Like to leave a note on the contributor's page but have nothing to show the required process. Brings up a larger note: I've been here nine years and I still can't find things on the Wiki. The link to the permission letter should be on the Help page noted above, but it isn't. Ignoring for the moment that some of our help needs re-wording, there is a definite dearth of comprehensive links between different 'departments'[?]. Inputting 'images' into our search window gets one virtually nowhere, six pages for titles, an unknown number of pages of discussions. No user wants to have to sift thousands of lines of discussions in the hope they may find the magic link [I certainly don't]. I'm boggled that the same search doesn't at least take one to the page above [incomplete as it is]. In fact, I can't remember inputting any word/phrase into that search window and getting anything but discussions. That's not what should happen [at least not only what should happen]. Maybe I'm ranting, [steam has to go somewhere] but this is frustrating. Larger discussion needed, for sure. For now, where's the letter??¿¿?? ;-))) --~ Bill, Bluesman 19:58, 15 August 2017 (EDT)

Is this the submission in question? If so, then the image is hosted by "pulpcovers.com" rather than "pulpscans". The permission to link to pulpcovers.com was given by the site's owner back in 2013 -- Ahasuerus 20:11, 15 August 2017 (EDT)
Yep, that's the one --~ Bill, Bluesman 20:50, 15 August 2017 (EDT)
here is the relevant portion of the owner's Talk page. To get to it from Help:How to upload images to the ISFDB wiki, click on ISFDB:Image linking permissions, which is linked in the first paragraph. Then search for "pulpcovers", which links to the permission. Ahasuerus 20:11, 15 August 2017 (EDT)
My eyes! I was looking for the letter in the Contents part of the page. Not sure, then, if the permission is given, why the submission won't show the image?? --~ Bill, Bluesman 20:50, 15 August 2017 (EDT)
I can see the submitted image clearly, so I assume that there is some kind of problem between your browser and pulpcovers.com. Have you tried clearing the browser cache and/or restarting the browser and the computer? Ahasuerus 21:16, 15 August 2017 (EDT)
Nope, refreshed, brushed the browser's teeth, still can't see the image. Time for an upgrade, methinks. My browser is still lightyears behind the 'Apps' phase. My philosophical problem in this is that there seems to be no way to keep Google from occupying one's machine. I'd gladly upgrade if I could keep them off. --~ Bill, Bluesman 21:26, 15 August 2017 (EDT)
Hm, that's odd. Can you see the main page, https://pulpcovers.com ? Ahasuerus 21:27, 15 August 2017 (EDT)
Nope, Safari refuses to connect. I know, schizoid system ... I'm still operating from 5.8. Lowest I can use and still keep Google off. Any later version and the "Never" on the cookies frontier is a joke. Unless you know of a work-around?? I've had several [animated] discussions with MAC people on this and they are always astonished that they can't keep Google off, yet have no solutions. It's a back door, built into every computer system from 2009 and later [PC or MAC]. Hope they paid a bundle .... doesn't solve the problem, though. Ideas? --~ Bill, Bluesman 21:36, 15 August 2017 (EDT)
I am afraid I know quite literally nothing about Safari except that it's a browser used by Mac users. As far as backdoors go, does Apple still support version 5.8 of Safari on your version of the operating system? If it's unsupported, then it may be open to attacks. Of course, the number of Mac-specific viruses is much lower than Windows-specific viruses, but there are some out there. Ahasuerus 22:24, 15 August 2017 (EDT)
What do you mean by "keep Google off"? ···日本穣 · 投稿 · Talk to Nihonjoe 02:54, 16 August 2017 (EDT)
When I last upgraded [to version 6.3[?]] if one set cookies to 'never' [which I always do after logging on here], Google could still drop one on the computer. If I just kept deleting it, every 20 minutes another one would be there = back door. Apple Store 'Genius' bar guys can't figure out a way to plug the hole. Apple won't admit it's there. So I reverted the OS by one version where I can control what cookies get on. I'm going to have to upgrade at some point, I already have difficulties with some sites, but it rankles me deeply that Google could drop a cookie on my computer at will. --~ Bill, Bluesman 16:26, 16 August 2017 (EDT)
Interesting. ···日本穣 · 投稿 · Talk to Nihonjoe 18:57, 16 August 2017 (EDT)
As far as the larger issue goes, unfortunately we have limited control over the Wiki software. We can do all kinds of things on the ISFDB side, but the Wiki side is completely different. It even uses a different programming language, which I don't know. Even if I knew it, I would still be unable to change it since it's a self-contained package.
We definitely need to upgrade to the latest version of the Wiki software, which may help with searching, but I know so little about it that I am afraid to touch it for fear of breaking things. I have asked Al to do it, but he hasn't had the time... Ahasuerus 20:11, 15 August 2017 (EDT)
Really? We seem to be able to change stuff in the help routinely, or is that just in the wording? --~ Bill, Bluesman 20:50, 15 August 2017 (EDT)
Basically, any Web page whose URL starts with "http://www.isfdb.org/wiki" is handled by the Wiki software. All other URLs that start with "http://www.isfdb.org" are handled by the core ISFDB software, which we have full control over.
The Wiki software lets us create, delete and edit Wiki pages, add users, ban spammers, upload/delete images and do all kinds of other things that its creators originally programmed it to do. However, we can't change the software itself. This means that we can't change the way it searches Wiki pages when you enter something in the Wiki search box. The only way to make Wiki searching better is to upgrade the Wiki software to a more recent version. It's like an old car: you can drive it around town and it has a radio and a CD player, but if you want fancier features like a built-in navigator, you need to upgrade to a newer car :-) Ahasuerus 21:23, 15 August 2017 (EDT)
I had no idea we were participating in two separate [yet linked] universes. ;-)) Makes sense, except in the execution of same ... it is our bailiwick, after all .... Al should come clean with his secrets or the differences between the DB and the Wiki could become way too wide. You can tell him I said so!! My address is still a secret, right??¿¿?? --~ Bill, Bluesman 20:50, 15 August 2017 (EDT)
Have no fear! Al is a peaceable man -- or at least he has never been convicted! Ahasuerus 21:25, 15 August 2017 (EDT)
I love the way I feel fuzzy/warm yet have the crap scared out of me .... you're a peach!! Al can pick whatever part of the fruit that serves him best! --~ Bill, Bluesman 21:29, 15 August 2017 (EDT)

Here's another series for you to read

How a Realist Rebuilt the Kingdom. ···日本穣 · 投稿 · Talk to Nihonjoe 13:44, 17 August 2017 (EDT)

It's good to see that J-Novel Club publications are being added to the database!
Unfortunately, the ISFDB leaves me little time for reading SF, but I try to keep track of relatively new developments like light novels, LitRPGs, etc. Web novels, which many light novels are based on, present particularly thorny bibliographic challenges which we will need to sort out before we can change our eligibility criteria. I also note that some Web novels get translated and published apparently without permission, which can result in oddball publications like this case. Ahasuerus 14:02, 17 August 2017 (EDT)

Data from Penguin website

I just noticed that Penguin had a list of science fiction/fantasy books across all their imprints which has ISBNs in the URLs. You can transform that into something to give to Fixer, right?

Thanks, I wasn't aware of that list. I'll check it against our database to see if we are missing anything. Ahasuerus 15:16, 23 August 2017 (EDT)
Done. The SF section of their site lists over 700 ISBNs, including 191 ISBNs not in our database. Of the latter, close to one half are collections of stories by mainstream authors like Mark Twain. They may contain one or two speculative stories, but there is no easy way of telling. Another big chunk consists of forthcoming (2017-11 through 2018-05) SF books, which I have added to Fixer's internal database. The rest are surrealistic collections and a couple of paranormal mystery series, which Amazon lists under "cozies". I have added the last two to the database, so we should be in reasonably good shape. Thanks again for identifying the resource! Ahasuerus 20:22, 23 August 2017 (EDT)

I've been experimenting with harvesting and transforming URLs and I probably can find lists of data from all of the big publishers. What form of data would be most useful to you? --Vasha 14:31, 23 August 2017 (EDT)

What kind of data are we looking at? URLs, ISBNs, titles, authors, publication level details? If it's publication level details, would you be able to use them to create submissions, either manually or using the Web API? If you are comfortable with the Web API, please let me know and I will add you to the white list of editors authorized to use it to create submissions. Ahasuerus 15:16, 23 August 2017 (EDT)
Not familiar with Web API. But I can generate ISBN lists easily. Just a matter of looking in the right places on the publisher websites. --Vasha 15:32, 23 August 2017 (EDT)
More ISBNs is always good! Unfortunately, finding them is only the first step. Over the years Fixer has accumulated hundreds of thousands of WorldCat-, Amazon-, B&N-, etc-originated ISBNs. I'd love to have Fixer submit all of them, but we don't have enough moderator bandwidth to process them. There is a long list of things that a moderator needs to review when processing an automated submission -- see this Help section -- and it would overwhelm our moderator team. For this reason I have created an elaborate prioritization system, which I continuously fine-tune as the field changes. Ahasuerus 20:44, 23 August 2017 (EDT)
The Fixer description just says priority is given to major publishers and authors already in the database; is there more to it? Or is the "elaborate" part just deciding what's a "major" publisher? --Vasha 11:00, 24 August 2017 (EDT)
There is a surprising amount of complexity involved. For example:
  • Amazon has been known to change its records for older or subsequently reprinted books, so we need to be on the lookout for that.
  • Since Amazon doesn't display ISBNs for e-books, they are frequently entered without an ISBN and then Fixer comes along and creates a duplicate.
  • A number of publishers (like Yen On) make "preliminary" cover scans available and then change them right before the book is published.
  • Some academic publishers do not make cover scans available pre-publication, so their books have to be set aside and revisited later on.
  • There are other types of canceled or otherwise suspect pre-publication ISBNs out there and they require careful examination lest we pollute the database with vaporware.
  • Amazon and other sources occasionally misspell or otherwise mangle author names, titles, publishers, editors and everything else that can be messed up. Misspellings can result in major authors/publishers getting filtered out by Fixer's logic.
  • Amazon's API is... quirky, to say the least. For example, sometimes you ask it for one ISBN and it will return another, usually (but not always) a related one.
The list goes on and on. I try to automate these checks as much as possible, but there is only so much that can be done in the software. And so I end up reviewing/massaging thousands of ISBNs per month and then approving hundreds of them. Ahasuerus 12:59, 24 August 2017 (EDT)
Oh boy, you have my sympathy. I notice you don't mention the problematic nature of Amazon's publication dates, though. When entering data from Amazon I always try to get the date from the publisher; it's different from what Amazon states quite often. For instance, take this. Amazon is displaying a date of June 12; but the publisher announced its release on July 15 (and confirming that the book was not actually available when Amazon said, there's an author's blog post from June 19 saying that the book "will be published"). --Vasha 13:18, 24 August 2017 (EDT)
Yes, publication dates are a big can of worms. "Delayed publication" is a fact of life in this business and retailers don't always handle changed dates well. In Amazon's case the problem is exacerbated by the fact that they have multiple national stores, which handle delayed publication scenarios differently. Ahasuerus 14:00, 24 August 2017 (EDT)
User:Fixer/Public is my latest attempt at facilitating the process of entering Fixer-originated data into the database; I think it's almost ready for a public rollout.
Having said that, we may want to examine if there may be ways to support your ISBN identification effort on the software side. For example, would it help to create a Web page that would take a list of carriage return-delimited ISBNs, check it against the database and display a list of missing ISBNs? Ahasuerus 20:44, 23 August 2017 (EDT)
That ISBN-checking function would be excellent, yes. --Vasha 10:28, 24 August 2017 (EDT)
OK, I have created FR 1091 in support of this functionality. We will need to discuss the details on the Community Portal before it's implemented.
Also, please don't add ISBNs directly to Fixer's "public" pages. They are generated via a special process, which marks each added ISBN as "turned over to the public" in Fixer's internal database. I have updated the status of the added ISBNs in Fixer's database manually, but we'll need to be careful to avoid disconnects going forward. I plan to post the proposed process on the Community Portal shortly. Ahasuerus 13:37, 24 August 2017 (EDT)
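
To make the proposed FR 1091 page concrete, here is a minimal sketch of the check it would perform (the function and its arguments are illustrative assumptions, not the eventual implementation; in practice the known ISBNs would come from a database query rather than a set):
def find_missing_isbns(pasted_text, known_isbns):
    # pasted_text: carriage return-delimited ISBNs pasted into the form
    # known_isbns: ISBNs already in the database (a set here for simplicity)
    missing = []
    seen = set()
    for line in pasted_text.splitlines():
        # Normalize: drop hyphens and surrounding whitespace
        isbn = line.strip().replace('-', '')
        if not isbn or isbn in seen:
            continue
        seen.add(isbn)
        if isbn not in known_isbns:
            missing.append(isbn)
    return missing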

(unindent) Macmillan does not have a list of SFF books but rather tags their books with dozens of different designations like "Fantasy / Epic" and "Fantasy / Urban." So it's more work to check that! I did go through and grab the ISBNs from every tag that had something to do with speculative fiction. Minus duplicates, graphic novels, and promo materials it came to 5,421 ISBNs. (One reason there's a lot is that Tor is a Macmillan subsidiary.) What would you like me to do with the list? --Vasha 10:28, 24 August 2017 (EDT)

I guess the easiest way to handle these ISBNs would be to put them in a text file and send it to me as an attachment. My e-mail address is on my user page. Ahasuerus 14:01, 24 August 2017 (EDT)
Got it, thanks! Ahasuerus 15:08, 24 August 2017 (EDT)
Done. There were approximately 500 new ISBNs. The vast majority were picture books and "talking animal" books with no (or very little) other speculative content. For some reason a certain cozy mystery series was labeled "SF" and appeared over and over again. I have added the rest to Fixer's internal database and submitted a couple dozen Tor reprints. Thanks again! Ahasuerus 17:34, 24 August 2017 (EDT)

Peaslee Papers

The Peaslee Papers needs to be put back into Fixer's pending list for the moment; Amazon has it with a date of July 15, 2017, but doesn't actually have any copies available; and the publisher doesn't mention its existence at all. However, in an August 5 interview, the editor stated that the book was forthcoming, so presumably it'll show up sooner or later. --Vasha 22:26, 29 August 2017 (EDT)

Done. Thanks for identifying the issue! Ahasuerus 23:02, 29 August 2017 (EDT)

The Lord of the Overrings

Hi ! Since we are now offered Bokmål and Nynorsk options, would it be possible to have the former spelt Bokmål (instead of Bokmal) in the list ? Or doesn't the system accept overrings in this particular place ? Takk så mye, Linguist 09:57, 30 August 2017 (EDT).

Unfortunately, there are some technical issues with overrings in this area :( Ahasuerus 10:15, 30 August 2017 (EDT)
Så synd :o(( ! Thanks all the same ! Linguist 10:36, 30 August 2017 (EDT).

Cleanup report for De Nederlandse Bibliografie

Can you add a cleanup report for PPN links in the publication note field? No hurry, but it would remind me to correct these. Thanks, --Willem 06:07, 31 August 2017 (EDT)

Done! Ahasuerus 16:19, 31 August 2017 (EDT)

Audible and ASINs

Check this out. I don't have any bright ideas for what to do about it (if anything). --MartyD 07:32, 19 September 2017 (EDT)

Unfortunately, Amazon's ISBN-ASIN linkage is not all that it could be. In addition, there are occasional discrepancies between country-specific stores, reused ISBNs, etc. No such thing as perfection! Ahasuerus 09:02, 19 September 2017 (EDT)

Merge Enhancements

I submitted two merge enhancements to the SourceForge project:

  • 1095 Alpha Compare Authors on Merge Comparison
  • 1096 Editor Record Dates on Merges

Hopefully they are self-evident, but let me know if you have questions. Of the two, the first one is a higher priority in my opinion. Without alphabetical ordering, it is hard to tell whether the edit is merging the same names in a different order or a slight variation in one name (like a spelling issue). This means it can be easy to let through a change that impacts a verified pub. Thanks. -- JLaTondre (talk) 18:54, 19 September 2017 (EDT)

1095 looks good. I'll have to think about the best option to tackle 1096 while minimizing behavior inconsistency across different title types. Ahasuerus 19:09, 19 September 2017 (EDT)

Variant problem

I don't understand what happened here. I created a record for "The White Cat", credited to "Mme. d'Aulnoy". I made Mme. d'Aulnoy a pseudonym for Comtesse d'Aulnoy. If I go to the record for d'Aulnoy's La chatte blanche, it shows three English language versions of "The White Cat" by her, under various names. But none of those three variants show up on the summary page for the canonical name! I don't understand why they are missing. Chavey 01:37, 20 September 2017 (EDT)

If Comtesse d'Aulnoy is the canonical name (it appears it is), then the three translated variants are showing up for me. Do you have "All" selected under Translations on your Preferences page? If that's not it, the only thing I can think of is a browser cache issue. ···日本穣 · 投稿 · Talk to Nihonjoe 02:29, 20 September 2017 (EDT)
I also see the three translations on the Summary page. Ahasuerus 08:04, 20 September 2017 (EDT)
Apparently, I had changed my prefs to not show translations, and forgot to change it back. That corrected it. Chavey 09:37, 20 September 2017 (EDT)

ACT

Is this one correct? I added some Russian to the note and added their official website. ···日本穣 · 投稿 · Talk to Nihonjoe 01:48, 27 September 2017 (EDT)

Actually, we already have it on file as "АСТ". The names look identical, but the one that you linked used Latin characters while the other one uses Cyrillic characters. I have merged the records, so we should be in better shape :) Ahasuerus 09:25, 27 September 2017 (EDT)
It was listed on this cleanup report, so I was trying to get it off. Thanks! ···日本穣 · 投稿 · Talk to Nihonjoe 12:38, 27 September 2017 (EDT)

Solaris

Cover artist of this is Chris Yates; his initials, C.Y., can be seen at the bottom of the back cover. Horzel 06:38, 27 September 2017 (EDT)

Updated, thanks! Ahasuerus 13:38, 27 September 2017 (EDT)

Babel-17

The cover artist of Babel-17 is Vicente Segrelles, see his site, image CF_289. Horzel 06:45, 27 September 2017 (EDT)

Looks good, thanks! Ahasuerus 13:40, 27 September 2017 (EDT)

Different editions of ebooks

Hello, as you probably know, I'm usually the one that works the "potential duplicate ebooks" cleanup report. After the first big batch, I'm now faced with new duplicates that only differ in price (and perhaps sometimes in the cover shown by amazon, as they're unPVed). As they have strictly identical characteristics (mostly pub date, ISBN and publisher, but it's the logic of the thing) but a different "extraction" date (e.g. 2013 vs. 2017), I suppose that's the result of a new price given either by amazon or the publisher. IMHO such changes (price and/or cover), in the case of a bunch of electrons, don't create a new publication, and so I delete the older record. What are your (and other interested parties that usually read this page) thoughts on the matter? Hauck 03:56, 28 September 2017 (EDT)

Amazon has two price fields which publishers can change: "Digital List Price" and "Kindle Price". Publishers can update them post-publication (e.g. this discussion), so changes to their values do not necessarily indicate a new printing. If the data is otherwise identical, I would keep the original record and add a note about the price change. It provides us with an audit trail in case we later find out that the changed price was associated with a new printing. Ahasuerus 10:44, 28 September 2017 (EDT)
I feel the same about them. Amazon is also sloppy about supplying the corresponding covers. I have the impression that they also use pre-publication images that in fact were never published (and even if they change the images for a 'new edition', we can't tell them apart unless they are PVed). Stonecreek 05:19, 28 September 2017 (EDT)
Amazon generally uses publisher-provided cover scans. Here is how it usually works with major publishers:
  • The publisher creates a "bare bones" Amazon record 6-12 months before the projected publication date. These records typically do not have cover scans associated with them.
  • 3-6 months prior to publication the publisher updates the record with a pre-publication cover scan. If the publisher thinks that the cover may change, a band with the words "Not Final" is added to the scan.
  • The publisher reviews and updates the publication record prior to publication. If the cover scan has changed, it gets replaced.
Or at least that's how it's supposed to work in theory. In reality some publishers skip some steps, hence the occasional discrepancies between what's listed by Amazon and what's in the book.
Once you become familiar with how each publisher operates, you can usually identify publisher-specific patterns. For example, I don't trust Yen On's pre-publication scans nearly as much as I trust Tor's. Ahasuerus 10:55, 28 September 2017 (EDT)
One only wonders how amazon gets to something like this, when the actual cover scan is that. Both are for the same 1983 publication, but note the differences (absence of 'Lübbes Auswahlband' and '83').
Even better is this example, where the actual image seems to have been actively censored. Stonecreek 12:01, 28 September 2017 (EDT)
Re: the second example, it looks like the image used by Amazon may have been an early version of the cover that was eventually used by the publisher -- note that the text at the bottom of the cover is different as well. If so, then it's another example of the publisher failing to update the Amazon record after finalizing the cover art. Ahasuerus 12:48, 28 September 2017 (EDT)
Yes, that's possible. But it shows that covers taken from amazon (or any other similar source) can't be trusted 100% to identify a publication, doesn't it? Stonecreek 16:20, 28 September 2017 (EDT)
Indeed. Unfortunately, as I discovered earlier this month, Google Books is even worse when it comes to forthcoming books :-( Ahasuerus 16:35, 28 September 2017 (EDT)
As odd as it seems to trust Goodreads, I think they are a better source for covers. They encourage users to add alternate covers, and don't delete images they have on file. So they will have all variants, including preliminary covers, scans of the final ones, and those of subsequent publications. The only thing is that the quality of the information telling you exactly which editions/printings these belong to is variable, but sometimes people do add careful librarian notes. Vasha 16:44, 28 September 2017 (EDT)
I find that Goodreads has its uses. For example, there was a time when Amazon refused to delete out of print and superseded editions, but their policy has apparently changed over the last few years. These days if I am trying to figure out if Amazon has "overlaid" the most recent edition on top of an older one, more likely than not Goodreads has the answer. Ahasuerus 17:00, 28 September 2017 (EDT)

Error in display for Japanese Rendezvous with Rama serialization

Please see the 1975-07-00 entry on this page. It's listing the magazine issue (S-Fマガジン 1975年07月号, #200) instead of the title of the serialized story (宇宙のランデヴー (Part 1 of 7)). ···日本穣 · 投稿 · Talk to Nihonjoe 18:11, 29 September 2017 (EDT)

I see the title of the first part of the serialization ("宇宙のランデヴー (Part 1 of 7)") in the "Translated Serializations" column (under "Other Titles") and I see "S-Fマガジン 1975年07月号, #200" in the "Publications" section, which seems to be right. How would you expect them to appear? Ahasuerus 18:21, 29 September 2017 (EDT)
Sorry, my bad. I was in the mode of looking for titles of the work, not the magazine. That's what I get for trying to think in two languages at once. ···日本穣 · 投稿 · Talk to Nihonjoe 18:37, 29 September 2017 (EDT)
No worries! Ahasuerus 18:38, 29 September 2017 (EDT)

Long Ago, Far Away

Please take a look at this problem. It refers among others to The Most Thrilling Science Fiction Ever Told, Fall 1968. We have this also as a chapbook. It seems that the title is only a novella, according to Bob's word count. Do you have any advice at hand how to deal with it? Stonecreek 01:31, 3 October 2017 (EDT)

Taking a sharper look, it seems that Leinster expanded what really is a novella (Long Ago, Far Away) into a NOVEL (Four from Planet 5). What's your thinking on this? Stonecreek 03:48, 3 October 2017 (EDT)

Done! Ahasuerus 11:53, 3 October 2017 (EDT)

Old talk at Community Portal

Concerning ISFDB:Community Portal#OCLC URLs in Notes, initiated July, now section 4 on that page.

My submitted PubUpdate 3592075 includes one OCLC ID correction that literally contradicts what you noted in July, and some improvement in layout and typography that clarifies the Notes but makes the July discussion much more difficult to follow.

I leave it to you whether any notice should be inserted in that old talk section, which I would ordinarily insert myself, because

  • the Note as revised retains the main feature discussed, simply that one line begins "OCLC ID" and the next line reports from the linked WorldCat record
  • the point may be obsolete
  • I suppose that material from section 2, July and later, will soon be archived, and modification of section 4 will delay its move to the archive.

--Pwendt|talk 14:49, 11 October 2017 (EDT)

Thanks for letting me know! Due to the problems discussed in July, I have pretty much abandoned the automatic conversion approach. I will update that discussion to indicate that the example is no longer valid. Ahasuerus 15:18, 11 October 2017 (EDT)

Publication delay for queue

The publication of Shining in the Dark has been delayed and the publisher is now saying "early 2018" -- please put this record back into the queue. Thanks. --Vasha 18:38, 12 October 2017 (EDT)

Will do. Thanks! Ahasuerus 18:42, 12 October 2017 (EDT)
P.S. The ISBN is 978-1587675928. Ahasuerus 00:07, 21 October 2017 (EDT)

And another: Frontier Worlds has not yet been published, although the editor still says it is in progress. --Vasha 19:33, 20 October 2017 (EDT)

Sounds good! For now I'll delete the publication record and the stories. Here is the deleted data for future reference:
 Publication: Frontier Worlds: Twelve Stunning Tales Chronicling the Future History of the Human Race
 Publication Record # 616596
 Editors: Scott Harrison
 Date: 2017-06-01
 ISBN: 978-1-911390-01-5 [1-911390-01-5]
 Publisher: Snowbooks
 Price: £9.99
 Pages: 540
 Binding: tp
 Type: ANTHOLOGY
 Container Title: Frontier Worlds: Twelve Stunning Tales Chronicling the Future History of the Human Race • anthology by Scott Harrison
 Notes: Data from Amazon UK as of 2017-05-02. Contents from editor's website.
   Zarla's World • short fiction by Eric Brown
   My Last Death • short fiction by Jacqueline Rayner
   Weak Gods of Mars • short fiction by Ken MacLeod
   Endangered Species • short fiction by Scott Harrison
   Last Born • short fiction by Tanith Lee
   Durance Vile • short fiction by Michael Cobley
   Rodeo Day • short fiction by Philip Palmer
   The Eternity Wing • short fiction by Sadie Miller
   Hostile Takeover • short fiction by Gav Thorpe
   Hidden Depths • short fiction by Justin Richards
   The Expert System's Brother • short fiction by Adrian Tchaikovsky
   In the Speed of Their Wings Keep Pace • short fiction by Storm Constantine
Ahasuerus 00:02, 21 October 2017 (EDT)

ISBN already on file Bug

There seems to be a bug with the "ISBN already on file" flag on the moderator screen when it comes to catalog records. #PP10, #PP12, & #PP13 all gave me the warning, but when I clicked on the link, no results were found. A manual search found records starting with those numbers, but no exact match. #PP08, #PP09, & #PP11, processed at the same time, did not give me the warning. -- JLaTondre (talk) 17:02, 26 October 2017 (EDT)

Good catch, thanks. The "duplicate ISBN" logic was looking for ISBNs/Catalog IDs which start with the entered string. It worked OK for ISBNs, which are unique, but not for Catalog IDs. I have changed the logic to look for exact matches. Ahasuerus 18:41, 26 October 2017 (EDT)
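
Schematically, the change amounts to the following (a hypothetical helper for illustration, not the actual moderator-screen code):
def flag_duplicate_id(entered_id, existing_ids):
    # Old behaviour: warn if any ISBN/Catalog ID on file merely starts with
    # the entered value -- harmless for ISBNs, which are unique, but wrong
    # for catalog IDs ("#PP10" would also match "#PP100", "#PP101", etc.)
    #   return any(existing.startswith(entered_id) for existing in existing_ids)
    # New behaviour: warn on exact matches only
    return entered_id in existing_ids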

Fixer Ebook ISBNs

Fixer doesn't seem to be remembering ISBNs it has already submitted for ebooks. I'm holding multiple edits that I've already rejected (example) - a few twice. In each of these cases, when I look at the Amazon Look Inside for the Kindle version, there is no ISBN shown. The ISBN is only on the print book so it looks like an Amazon database issue. Once Fixer submits an ISBN, if it is rejected, it shouldn't try again. -- JLaTondre (talk) 11:46, 28 October 2017 (EDT)

Let me check... Yes, you are right, there was a flaw in Fixer's ISBN/ASIN reconciliation logic. I have corrected the bug and marked these ISBN as "rejected" in Fixer's internal database. Please go ahead and reject the submissions. Thanks for catching the critter! Ahasuerus 14:05, 28 October 2017 (EDT)
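
The intended bookkeeping is roughly this (names are hypothetical; Fixer's internal database is more elaborate):
def should_submit_isbn(isbn, status_by_isbn):
    # status_by_isbn: Fixer's remembered status for each ISBN (assumed structure)
    # ISBNs that were already submitted, or that a moderator rejected, are
    # skipped so they are never offered again
    return status_by_isbn.get(isbn) not in ('submitted', 'rejected')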

My Recent Edits

In the left-margin menu it's "My Recent Edits" and it seems to be complete. Do we have any tools that permit filtering or sorting it? For instance, I would select my own edits of type SeriesUpdate for the purpose of finding all my contributions to magazine top pages. --Pwendt|talk 17:57, 31 October 2017 (EDT)

There is nothing like that at this time, but we have FR 927, "Add the ability to search submissions". We could implement it as "Advanced Submission Search" similar to Advanced Title/Author/Publication Search. The selection criteria could be something like "Submitter" and "Submission Type". Ahasuerus 18:49, 31 October 2017 (EDT)

Alternatively, is it possible to get a .csv file of my edits, or another format that can be manipulated by text editor? --Pwendt|talk 17:57, 31 October 2017 (EDT)

Submission history is excluded from the ISFDB Downloads. It's a huge table, with millions of submission records, so it would make the public backup files much larger than they are now. Also, its structure, unlike the structure of regular ISFDB tables, is not for the faint of heart :-\ Creating a CSV file based on its data would not be a trivial proposition. Ahasuerus 18:49, 31 October 2017 (EDT)

Original publications needed for some Russian stories

Hi, do you know what the originals would be for "The Heavenly Christmas Tree" by Dostoevsky and "God" by Zamyatin? --Vasha 20:41, 2 November 2017 (EDT)

Done! Ahasuerus 22:09, 2 November 2017 (EDT)
Thanks --Vasha 22:25, 2 November 2017 (EDT)

Fuzzy Bones

Is the ISBN for [this] record correct? A new editor has entered a printing with an earlier date, one number higher in the catalog progression that Ace used. Ace did many weird things but I've never seen them revert to an earlier number for a later printing. Now he/she wants to change that record to the same date as the one you verified but with a different catalog # [I think the edit was meant to clone]. At the moment it's on hold. Thanks! --~ Bill, Bluesman 12:55, 10 November 2017 (EST)

Let me check... My copy says "Third printing / January 1983". The ISBN as printed on the front cover (and on the back cover) is 0-441-26182-5, not 0-441-26181-7 as was stated in the publication record. Either I was careless when I cloned the pub or someone changed it after I had verified it. Since I am the only verifier, I have corrected the ISBN. Thanks for catching the error! Ahasuerus 17:59, 10 November 2017 (EST)
Excellent! That will make this easier to straighten out. Thanks for checking. --~ Bill, Bluesman 18:29, 10 November 2017 (EST)

Fixer PubUpdate Submissions

There are a number of Fixer PubUpdate submissions in the queue that are showing "Error: This submission is no longer valid. Pub record not found. Please use Hard Reject to reject it.". I notice none of them have a pub title. -- JLaTondre (talk) 11:43, 12 November 2017 (EST)

Let me take a look... Ahasuerus 11:45, 12 November 2017 (EST)
Also, I put a non-error submission on hold. It's changing the ISBN of a 2007 tp. The moderator note is the same that is used for adding an ISBN to an ebook. However, it's editing a tp and the ISBN being added seems to be for a completely different book. -- JLaTondre (talk) 11:48, 12 November 2017 (EST)
OK, I have found the bug. The last round of Fixer enhancements confused publication IDs and internal identifier IDs in the ISBN submission logic. Oops! I'll fix the software, then enter the new ISBNs manually and hard reject the bad submissions. Sorry about that! Ahasuerus 12:15, 12 November 2017 (EST)
All done! Ahasuerus 15:38, 12 November 2017 (EST)

Title Type Searches

With the new pull down for Title Type search, COVERART, INTERVIEW, and SERIAL are missing. Thanks. -- JLaTondre (talk) 14:28, 12 November 2017 (EST)

It looks like the missing title types are COVERART, REVIEW and INTERVIEW. The new code leverages the list of title types which we allow in the drop-downs in Edit Publication and I forgot that it excludes these types. Thanks for reporting the problem! Ahasuerus 14:37, 12 November 2017 (EST)
Sorry, meant REVIEW. -- JLaTondre (talk) 14:39, 12 November 2017 (EST)
No worries, it should be fixed now. Ahasuerus 19:49, 12 November 2017 (EST)
Thanks. -- JLaTondre (talk) 20:57, 12 November 2017 (EST)
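
In effect, the fix is to widen the list that the search drop-down is built from, along these lines (a sketch with made-up constant names and an abbreviated type list):
# Title types allowed in the Regular Titles drop-down of Edit Publication;
# a few types are handled elsewhere on that page, so they are not listed here
# (the list is abbreviated for illustration)
EDIT_TITLE_TYPES = ['ESSAY', 'INTERIORART', 'POEM', 'SERIAL', 'SHORTFICTION']
# Advanced Title Search needs the full set, so the excluded types are added back
SEARCH_TITLE_TYPES = sorted(EDIT_TITLE_TYPES + ['COVERART', 'REVIEW', 'INTERVIEW'])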

Ebook LCCN

Would you mind checking out this conversation? Appears to be an issue with ebook LCCNs. Thanks. -- JLaTondre (talk) 20:58, 12 November 2017 (EST)

Error message that could be improved

I tried to submit a new magazine when I had erroneously put the type of the editorial, titled "Words from the Editor-in-Chief (Apex Magazine, November 2017)", as EDITOR. The error message was "For new/added publications the reference title should not be entered in the Content section. It will be added automatically at submission creation time." This completely baffled me and it took me a long time to figure out what I had done wrong. I mean, I didn't enter the reference title in the content section! Maybe what you want to say is "the title type EDITOR should not be entered in the content section"? --Vasha 22:06, 16 November 2017 (EST)

That's a good point. The reason why this error message sounds so vague is that it tries to account for a number of different scenarios. For example, the same error message is displayed when an editor tries to enter a COLLECTION title in the Regular Titles section of a New Collection submission.
Based on your suggestion, I have changed the software to display the offending title type. Here is what it looks like for magazines and fanzines:
  • When creating a new MAGAZINE publication, an EDITOR title should not be entered in the Regular Titles subsection of the Content section. It will be added automatically at submission creation time.
and for omnibuses:
  • When creating a new OMNIBUS publication, an OMNIBUS title should not be entered in the Regular Titles subsection of the Content section. It will be added automatically at submission creation time.
Hopefully, this will clarify things. Thanks for reporting the issue! Ahasuerus 10:10, 17 November 2017 (EST)
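
For what it's worth, the parameterized message can be built along these lines (a sketch only; the real validation code covers more scenarios):
def reference_title_error(pub_type, ref_title_type):
    # Name the offending title type instead of the generic "reference title"
    return ("When creating a new %s publication, a %s title should not be "
            "entered in the Regular Titles subsection of the Content section. "
            "It will be added automatically at submission creation time."
            % (pub_type, ref_title_type))

# reference_title_error('MAGAZINE', 'EDITOR') yields the first example above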

Two very minor issues in the moderator backend

I just poked around the moderator backend a bit to make myself comfortable with it and found two very minor issues which could be improved:

  • Help:Screen:Moderator says that "Your own submissions are blue". However, the blue (current value #D0D0FF) is almost grey and is barely recognizable on the gray table cell background. What about changing it to a real blue, with white as the font color (because otherwise it'd be blue link on blue background), like this:
td.submissionown {
   background-color: #3333ff;
}

td.submissionown a {
   color: #ffffff;
}
  • The "Web Page" field in a submission doesn't show the submitted URLs as links and I have to copy/paste the URL to check the website. Having a real link on the submission page would be great. The link could use the attribute 'target="_blank"' so that people not familiar with the CTRL-click browser behaviour would stay on the submission page and get the link opened in a new tab/windows (though personally I don't like that very much and rather decide myself if I want to open a page in a new tab or not by using CTRL-click).

Jens Hitspacebar 18:27, 17 November 2017 (EST)

Thanks for the feedback! I'll gladly defer to the consensus opinion re: colors since I am hopelessly colorblind :-) Re: hyperlinking Web pages, I think it's a great idea -- FR 1108 has been created. Thanks again! Ahasuerus 19:00, 17 November 2017 (EST)
I have tried #3333ff on the development server and the resulting color is very blue. So much so that even I can see it: Test. Is this what you had in mind? Ahasuerus 19:06, 17 November 2017 (EST)
Yes, exactly. Like the other colours there, which are very yellow and very green :) Jens Hitspacebar 19:10, 17 November 2017 (EST)
I like this idea. ···日本穣 · 投稿 · Talk to Nihonjoe 19:48, 17 November 2017 (EST)
OK, the requested changes have been made. Let's see what the rest of our color-enabled moderators think of them :) Ahasuerus 20:37, 17 November 2017 (EST)
Thanks a lot, looks good. Jens Hitspacebar 06:06, 18 November 2017 (EST)
My pleasure! Ahasuerus 10:36, 18 November 2017 (EST)

(unindent)

One more clickable link would be nice to have: when the submission contains an Image URL, the submission page currently shows the image itself and its URL. If the image is hosted here it'd be great if there'd also be a clickable link to the image's wiki page at http://www.isfdb.org/wiki/index.php/Image:FILENAME (for example to http://www.isfdb.org/wiki/index.php/Image:FZWPLNTNGM2009.jpg). Jens Hitspacebar 11:02, 18 November 2017 (EST)

An interesting thought. Let me see what I can do... Ahasuerus 11:47, 18 November 2017 (EST)
Done! Ahasuerus 18:21, 18 November 2017 (EST)
Very nice, thanks a lot. Jens Hitspacebar 03:10, 19 November 2017 (EST)

ISBN-13 warning

One of the tiny traps I fell into myself several times is the wrong usage of ISBN-13 (as of 2007) versus ISBN-10 (until 2006) in submissions. Prohibiting ISBN-13 when the year is < 2007 is probably too restrictive, but would it be possible to add a yellow warning in the submission view if the wrong length was used for an ISBN? That way, new editors would learn the difference (if they read the yellow warnings after submitting the record, of course), and as a moderator it's less likely to be overlooked. Jens Hitspacebar 13:35, 19 November 2017 (EST)

We have two warning messages at this time:
  • 13-digit ISBN for a pre-2005 publication
  • 10-digit ISBN for a post-2007 publication
Would you suggest changing the first one to "13-digit ISBN for a pre-2007 publication"? Ahasuerus 14:28, 19 November 2017 (EST)
Oh, we have that already? I can't remember having ever seen it (or, ahem, I didn't notice it...) I just re-read the Template:PublicationFields:ISBN and saw, contrary to what I was thinking, that there's no strict separation between pre-2007 and post-2007 ISBNs ("The ISFDB software supports both formats, so if two forms of ISBN are present, you can enter either one."). Conclusion: no change needed! :) Jens Hitspacebar 14:42, 19 November 2017 (EST)
Always a good outcome! :) Ahasuerus 14:43, 19 November 2017 (EST)

Connor Cochran

I added a note to Cochran's bibliography page, regarding his claimed co-authorship of several of Peter S. Beagle's works. I think I've worded it fairly neutrally, but just to be sure, I'd appreciate it if you took a look at it as to whether this note is appropriate. Chavey 07:38, 20 November 2017 (EST)

Looks reasonably neutral to me. I have prettified it a bit, tweaking sentence structure and changing "biography" to "bibliography". Ahasuerus 10:44, 20 November 2017 (EST)
Thanks. (And that was a silly mistake on my part.) Chavey 23:10, 20 November 2017 (EST)

display of the "?"

I really like the new, smaller question mark. But could you move it a smidge further away from the word it follows? On this page, it is overlapping with the italicized Bengali titles; and I think just a little more room would look better for other alphabets too. --Vasha 14:24, 26 November 2017 (EST)

That's a good catch. I have moved the question mark to the right by one pixel, which seems to help a bit. I could move it even farther to the right which would also help Japanese authors like this one. However, it wouldn't look good when displayed next to Cyrillic titles and it would look really odd when displayed next to our format codes. Hm...
We can't change the position of the mouseover question mark based on the language, in part because languages can use multiple alphabets/scripts. However, we can position it differently based on the following conditions:
  • If the text is not a hyperlink, display the question mark right next to the text
  • If the text is a hyperlink:
    • If the text is italicized, display the question mark 2 pixels to the right
    • If the text is not italicized, display the question mark 1 pixel to the right
Let me see what I can do... Ahasuerus 17:36, 26 November 2017 (EST)
Unfortunately, it turns out that we can't adjust the positioning based on whether the text is italicized :-( I have tweaked it as much as I could, but there is still a pronounced difference between Bengali titles and Cyrillic titles: not enough white space in the first case and too much white space in the second case. I am not sure if there is much more that we could do :-\ Ahasuerus 18:52, 26 November 2017 (EST)
Any changes since last week on those besides moving them? Because this morning they do not work in my browser at all (latest version of FF on Windows). Annie 12:00, 27 November 2017 (EST)
Could you please provide an example? Does this page look OK? Ahasuerus 12:33, 27 November 2017 (EST)
I have no idea what was going on this morning - none of the mouseovers were showing anything more than "?" in the author page popups. Seems to be working now. If it starts happening again, I will try to investigate. Sorry for bugging you. Annie 12:39, 27 November 2017 (EST)
No worries! Are you, by chance, using Firefox Quantum (as opposed to Firefox ESR)? Ahasuerus 12:42, 27 November 2017 (EST)
Yeah - it is Quantum 57.0. Annie 12:46, 27 November 2017 (EST)
I see. I wonder if they may be still tweaking things. Ahasuerus 12:49, 27 November 2017 (EST)

If you could identify the Unicode subrange used for the field, you could specify an additional class on the hint span that could be used to provide script-level styling with some CSS.

<span class="hint bengali" title="Ravīndranātha Thākura">রবীন্দ্রনাথ ঠাকুর<sup class="mouseover">?</sup></span>
	
span.bengali sup {
    display: inline-block;
    padding-left: 2px;
}

It seems like there's enough context to be able to distinguish between italic and non-italic uses of the question mark as well, something like:

span.hint sup {
    display: inline-block;
    padding-left: 1px;
}

i span.hint sup {
    display: inline-block;
    padding-left: 2px;
}

Albinoflea 16:20, 27 November 2017 (EST)

Good points, thanks.
Re: Unicode subranges, Python 2.5 includes "unicodedata". It's not enough to do the trick on its own, but there is an extension, unicodedata2, which looks like it should work. Unfortunately, we still use HTML encoding to store Unicode characters, so we'd need to find the last "true" character in each string and convert it to Unicode first. Python 2.5 includes htmlentitydefs, which once again isn't enough to do the trick, but there is yet another homegrown module which seems to get us what we need. Overall it's probably doable, but may take some time to implement. We'll also need to consider the performance implications for long pages like Silverberg's biblio page.
Re: italicizing, the information is certainly available within the generated HTML code, but the way the current version of our software builds HTML pages the data is not accessible at the right point. I'd need to refactor the code first. Ahasuerus 10:20, 28 November 2017 (EST)
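
A rough sketch of the script detection step described above, assuming the stored value is a unicode string containing HTML entities and that checking the prefix of the last character's Unicode name is good enough to pick a CSS class (the real implementation may well need unicodedata2 and the homegrown entity module mentioned above):
import re
import unicodedata
from htmlentitydefs import name2codepoint

def last_true_char(html_text):
    # Decode numeric and named HTML entities, then return the last character
    def decode(match):
        ent = match.group(1)
        if ent.startswith('#x') or ent.startswith('#X'):
            return unichr(int(ent[2:], 16))
        if ent.startswith('#'):
            return unichr(int(ent[1:]))
        return unichr(name2codepoint.get(ent, ord('?')))
    decoded = re.sub(r'&(#?\w+);', decode, html_text)
    return decoded[-1] if decoded else u''

def hint_script_class(html_text):
    # Pick a CSS class for the mouseover "?" based on the last character's
    # Unicode name; crude, but enough for script-level padding rules
    char = last_true_char(html_text)
    if not char:
        return 'latin'
    name = unicodedata.name(char, u'')
    for script in ('BENGALI', 'CYRILLIC', 'CJK', 'HIRAGANA', 'KATAKANA'):
        if name.startswith(script):
            return script.lower()
    return 'latin'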

Unbound

Would you let Fixer know that "Unbound Digital" as a publisher should be converted to "Unbound"? Amazon lists their ebooks that way, but the copyright pages all show just Unbound. Thanks. -- JLaTondre (talk) 09:58, 10 December 2017 (EST)

Done! Ahasuerus 11:03, 10 December 2017 (EST)

SFWA Bulletin Index

SFWA is just about to complete a full index of fifty+ years of the SFWA Bulletin. It is now in a spreadsheet based on the Contento (Fictionmags) format. We would like to add it to the ISFDB. Is there a way to do this easily?

Michael Capobianco

At one point we looked into the issue of importing Fictionmags-compatible spreadsheets into the ISFDB database. It turned out to be a huge amount of work and the effort never got off the ground.
That said, we already have the SFWA Bulletin partially indexed using good old manual data entry. Once the master spreadsheet has been finalized, we can ask for volunteer editors who would be willing to enter the data into the ISFDB database. More data is always good :-) Ahasuerus 14:44, 11 December 2017 (EST)

Titleless Pub

Check out this pub. Somehow the title was replaced with "<br>", which broke all links to it. I had to look it up in the last database dump to find the pub_id. It's an unlikely case, but I thought I'd point it out (instead of just fixing it) in case you wanted to put in any error handling. Thanks. -- JLaTondre (talk) 17:54, 15 December 2017 (EST)

I'll try to recreate the problem on the development server. I expect that it's part of a much larger issue, i.e. that we allow raw HTML in most fields (there is a fix in the works), but I'll need to make sure. Thanks for letting me know. Ahasuerus 18:17, 15 December 2017 (EST)
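Not the fix that is in the works, just a generic illustration of the kind of server-side check that keeps raw HTML out of a field like the publication title; the helper name is hypothetical:

# Hypothetical sketch, not ISFDB code: strip tags and escape the remainder before
# a submitted title is stored, so a value like "<br>" can never become a title.
import re
import cgi

def clean_title(raw_value):
    # Remove anything that looks like an HTML tag, then escape stray <, > and &
    # so they cannot break the generated pages.
    no_tags = re.sub(r'<[^>]*>', '', raw_value)
    return cgi.escape(no_tags.strip(), quote=True)

With something along these lines in place, a submission whose title cleans down to an empty string (as "<br>" would) could be rejected outright instead of ending up in the database.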

Potential Duplicate E-book Publications

Has the 'Potential Duplicate E-book Publications' report been updated for the new catalog field? It is reporting ebooks as duplicate that do not have an ISBN, but have different catalog numbers. For example 292636 and 292637. Thanks. -- JLaTondre (talk) 09:18, 16 December 2017 (EST)

<click, click, click> It looks like this report is working as intended. It searches for e-books which were published at roughly the same time and have the same ISBN or no ISBN.
In the 9 cases that the report found this morning, one or both of the pubs have catalog IDs, but no ISBNs. In the past, catalog IDs were entered in the ISBN field, so the report ignored these 9 pairs because, as far as it could tell, they had different "ISBNs". Once all catalog IDs were moved to the new catalog field, these 9 pairs became eligible for inclusion since at least one pub in each pair had no ISBN. Does this make sense? Ahasuerus 12:35, 16 December 2017 (EST)
I understand why they are being reported. I don't understand why the report wouldn't be updated to handle the new catalog id field. It's now generating false positives that don't need to be there. If two ebooks both have no ISBN, but have different catalog numbers, why report them? -- JLaTondre (talk) 13:02, 16 December 2017 (EST)
Oh, I see. That's a good point, I didn't think of that. Let me see what I can do... Ahasuerus 13:24, 16 December 2017 (EST)
Done! Ahasuerus 15:59, 16 December 2017 (EST)
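For anyone curious about the shape of the change, here is a sketch of the pairing condition the updated report might use, written as the SQL string a Python cleanup script could issue. The table and column names (pubs, pub_year, pub_isbn, pub_catalog), the "roughly the same time" window, and the e-book filter are all assumptions for illustration rather than a copy of the live report:

# Illustrative only; real ISFDB table/column names, the date window and the
# e-book/format filters may differ from what is shown here.
POTENTIAL_DUPE_EBOOKS = """
    SELECT p1.pub_id, p2.pub_id
    FROM pubs p1
    JOIN pubs p2
      ON p2.pub_id > p1.pub_id
     AND p1.pub_year = p2.pub_year                        -- stand-in for the real date window
    WHERE p1.pub_isbn = p2.pub_isbn                       -- same ISBN, or...
       OR ((p1.pub_isbn IS NULL OR p2.pub_isbn IS NULL)   -- ...at least one has no ISBN,
           AND NOT (p1.pub_catalog IS NOT NULL            -- unless both pubs have catalog
                    AND p2.pub_catalog IS NOT NULL        -- IDs and those IDs differ
                    AND p1.pub_catalog <> p2.pub_catalog))-- (the new exclusion)
"""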

Advanced Title Search Series Does Not Contain

Unless I'm missing something, it looks like "Series does not contain" is not working as expected in advanced title searches:

  • Title search with Title starts with "Foreshadowings" and Author's Name is exactly "Scott H. Urban": 9 results
  • Title search with Title starts with "Foreshadowings", Author's Name is exactly "Scott H. Urban", and Series contains "Frisson": 7 results
  • Title search with Title starts with "Foreshadowings", Author's Name is exactly "Scott H. Urban", and Series does not contain "Frisson": 0 results
  • Title search with Title starts with "Foreshadowings", Author's Name is exactly "Scott H. Urban", and Series does not contain "anything": 7 results

So it seems "does not contain" only works if the title has a series assigned. I was expecting it would also return those without a series? -- JLaTondre (talk) 14:07, 16 December 2017 (EST)

It looks like the underlying problem has to do with the fact that series names, unlike title-specific fields, are stored in a separate table. I'll have to do more digging to see how easy it will be to fix the bug. Thanks for identifying the issue! Ahasuerus 17:53, 16 December 2017 (EST)
It turns out that Advanced Publication Search has the same problem with publication series. For example, a search on "Author's Name is exactly Poul Anderson", "Title contains !" and "Publication Series is not exactly Ace Double" finds only 2 publication records. There are other eligible publication records, like the Dennis Dobson edition of Let the Spacemen Beware!, but they are ignored because they are not in a publication series. This is clearly a flaw in the logic that builds the generic Advanced Search SQL query. I will go ahead and create a bug report. Thanks again! Ahasuerus 13:18, 23 December 2017 (EST)
The SQL bug has been identified. I think I know how to fix it, but I need to finish the monthly download first. Ahasuerus 10:25, 25 December 2017 (EST)
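For the record, the likely shape of that fix: the generated SQL reaches the series (or publication series) name through an inner join, which silently drops every record that has no series, so a "does not contain" / "is not exactly" filter never even sees them. Switching that part of the query to an outer join keeps series-less records in the result set. The fragments below are a sketch with assumed table and column names (titles.series_id, series.series_title), written as the strings the Advanced Search code might build:

# Illustration only; ISFDB table/column names are assumed, and the real code
# assembles these clauses dynamically from the user's search criteria.

# Current behaviour (simplified): the inner join discards titles with no series,
# so the NOT LIKE test never applies to them.
BUGGY_FRAGMENT = """
    FROM titles t
    JOIN series s ON s.series_id = t.series_id
    WHERE s.series_title NOT LIKE '%Frisson%'
"""

# Fixed behaviour (simplified): the left join keeps series-less titles and
# explicitly counts them as satisfying "does not contain".
FIXED_FRAGMENT = """
    FROM titles t
    LEFT JOIN series s ON s.series_id = t.series_id
    WHERE s.series_id IS NULL
       OR s.series_title NOT LIKE '%Frisson%'
"""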