Tuesday, October 19, 2021

Oxford, England. MURDER CAPITAL OF THE WORLD!


S. wants to go to England and has been talking about it forever. But for the last year or so I've been watching this cop-show documentary called Inspector Morse on PBS. It documents the work of the dedicated police detectives on the Oxford, England Murder Squad. This series has been a real eye-opener; these guys do difficult and dangerous work. Hardly an episode goes by without at least three or four people being murdered.

 

'Holy cow!' I said to S. 'If you have three people getting murdered in Oxford every week .. let's see, that's 156 people a year. Population of Oxford (I did a quick Google) is about 150,000 people .. Goodness! That makes a murder rate of 104 per 100,000 people!'

 

I did some more googling. 'That's almost three times the rate of South Africa which is supposed to have the highest murder rate in the world.' I was stunned.

'I don't think...' started S.

'Those British must have a lot of suppressed rage.' I did even more Googling but I couldn't find any independent murder statistics for Oxford at all!

'It's a scandal how the government suppresses this information,' I said.

'But, Bob .. ' she began to say.

'No, it's just too dangerous to go to England! In good conscience I can't expose you to that. We have to go somewhere a lot safer, like South Africa.'

 

I redoubled my research into the Inspector Morse documentaries and I picked up these helpful tips for those of you who want to grab Death by the whiskers and actually go to England.

 

1. Don't get in a car with strangers, particularly if the documentary cameraman can't show you who's driving.

 

2. For god's sake, every time you drive, CHECK YOUR BRAKE FLUID!

 

3. If you're rich in Oxford then immediately fire your entire staff: cooks, drivers, maids, the works.  It's a fact that at least one of your staff is the rightful heir to your estate and is just waiting for a chance to bump you off.

 

4. Stay away from Professors (apparently there's a large school in Oxford); one of them is likely to be your cousin and is just waiting to get rid of you in order to inherit your money.

 

5. Avoid fox-hunts like the plague.  In fact, anyone on a horse is likely to be bad news.  

 

6. Particularly be on the look-out for jovial British ex-colonels over 50.  They're poison, sometimes literally.

 

7. Stay away from people with funny looking weapons.  These appear to be the murder weapons of choice in Oxford.  Inspector Morse has documented grisly cases of murders with cross-bows, halberds, bows and arrows, and garrottes.

 

8. And remember, if you're approached by a bum, get away from him immediately.  He's actually your dear old dad whom you've never seen; your Mom had an affair with him in “the '60s” and he's going to screw up your claim to the estate.  In fact, avoid anyone connected to “the '60s”.

 

9. Most importantly, no taxidermists, ever.

 

S. and I finally decided on East Los Angeles for our holiday get-away this year.

 

'I love you, Bob.  You're always looking out for me', said S., giving me a little nuzzle.

 

'Someone has to, doll', I said. 

Saturday, October 9, 2021

Short Draft Ancient MakerSpaces Talk

[Page 1 - Title Page]

The Mycenaean Atlas Project is a relational database dedicated to storing accurate locations of Late Helladic find spots in continental Greece and the Aegean.   It has taken six years to reach this point; the effort is entirely self-funded.  


[Splash Page Slide]

Shortly after beginning the DB work I began to develop the software to put the DB online.  This is the site Helladic.info; it is open access - anyone may use it.  It currently hosts nearly 4500 site pages as well as 6500+ non-Bronze-Age locations that I call Features; these include things such as towns, bridges, churches, etc.

The use of the site is straightforward.  The easiest way is to enter a site name such as 'Mycenae' or 'Tiryns' in the search box and then click on the returned link.  This will bring you to the correct site page.  The site page itself has a search box so that you can, if you wish, continue with another search, or you may return to the control page.

[Goals]

The tool allows quick reference to specific individual sites.  It should allow researchers at any professional level to quickly investigate the several sites which characterize Bronze Age Greece.  These goals include giving users accurate coordinates for sites, letting them generate reports, and making the DB available for follow-on uses.

It features an intuitive user interface that allows for easy exploration of the BA landscape.  There is a dedicated page for each of the 4400+ sites.  The DB is accessible in several ways, including a nearly-completed API.  Researchers who would like to obtain the full DB should contact me through e-mail.


[Slide of control page]

This is the central (or 'control') page for the site.  It provides a variety of search methods for zeroing in on the site(s) you would like to investigate.  Besides a General Search that allows you to search for any string, there are other search types (a small query sketch follows this list):
  • Search by region (2 ways)
  • Search by combination of region, ceramic horizon, and type
  • Search by well-known or important sites
  • Search by gazetteer contribution ('literature')
  • Search by habitation size (sq. m.)
  • Search by elevation range (m.)
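
As a rough illustration of the combined region / ceramic horizon / type search, here is a minimal sketch in Python.  The record layout, field names, and sample values are my own assumptions for illustration; they are not the site's actual schema or code.

# A minimal sketch of a combined region / horizon / type filter.
# Everything here (field names, sample records) is illustrative.
sites = [
    {"placekey": "C0001", "region": "Argolid",  "horizons": {"LH IIIA2", "LH IIIB"}, "type": "citadel"},
    {"placekey": "C0002", "region": "Argolid",  "horizons": {"LH IIIB", "LH IIIC"},  "type": "citadel"},
    {"placekey": "C0101", "region": "Messenia", "horizons": {"LH I"},                "type": "tholos"},
]

def combined_search(region=None, horizon=None, site_type=None):
    """Return the place keys matching every criterion that was supplied."""
    hits = []
    for s in sites:
        if region and s["region"] != region:
            continue
        if horizon and horizon not in s["horizons"]:
            continue
        if site_type and s["type"] != site_type:
            continue
        hits.append(s["placekey"])
    return hits

print(combined_search(region="Argolid", horizon="LH IIIB"))   # -> ['C0001', 'C0002']
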
[Slide of a single site page]

Each site has its own dedicated page with:
  • Locational information
  • Accuracy indicator
  • Elevation
  • Type of site and finds
  • Ceramic horizons associated with the site

On the right-hand side of the individual site page we have the
  • Location Notes
  • Site Bibliography


[Slide of four individual site tools]

Tools for individual sites
  • Intervisibility: This shows all the sites in the DB which are intervisible with your chosen site.  It looks out 6.5 km.
  • Aspect and Slope: This analyzes the slopes at the site and tries to show which direction the site faces.
  • Nearest neighbors: This draws a map which shows your site's nearest neighbors along with the direction to those sites (a small distance-and-bearing sketch follows this list).
  • Three-dimensional modelling of the site environment: There are 3D terrain models for most of the sites; these were prepared for me by Xavier Fischer of elevationapi.com.
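
For readers curious about the arithmetic behind the intervisibility radius and the nearest-neighbor map, here is a minimal sketch, not the site's actual code: a haversine great-circle distance, a radius filter, and an initial-bearing calculation.  The sample coordinates and the 20 km radius in the demo call are illustrative assumptions (the intervisibility tool itself looks out 6.5 km).

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

# Illustrative sample data: (place key, lat, lon) - approximate values only.
sites = [
    ("C0001", 37.7308, 22.7561),   # e.g. Mycenae
    ("C0002", 37.5995, 22.7998),   # e.g. Tiryns
    ("C0003", 37.6260, 22.7224),   # e.g. the Argos area
]

def neighbors(origin, candidates, radius_km=6.5):
    """Return (key, distance km, bearing deg) for every other site within radius_km."""
    okey, olat, olon = origin
    out = []
    for key, lat, lon in candidates:
        if key == okey:
            continue
        d = haversine_km(olat, olon, lat, lon)
        if d <= radius_km:
            out.append((key, round(d, 2), round(bearing_deg(olat, olon, lat, lon))))
    return sorted(out, key=lambda t: t[1])

print(neighbors(sites[0], sites, radius_km=20.0))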

[Slide of four types of analysis for groups of sites]

Groups of Sites

In addition to single sites you can also analyze sites as groups, e.g. all peak sanctuaries or all sites in Messenia.  You define these groups from the Control page.  If you want to group sites from certain regions then you can choose those regions from the handy thumbnail maps on the Control page.  Once your group is created you can generate group reports:
  • Group aspect: This report looks at the slopes and aspects of all sites in the selected group.
  • Gazetteer: This deceptively simple list of all sites in the group turns out to be among the most useful reports.  The gazetteer has info on each site along with a link to each site in the group.
  • Elevation: The elevation report lists the elevation for each site along with a histogram and elevation graph for the group.  It also provides other statistics (a small summary sketch follows this list).
  • Specialized Bibliography: After you've selected a group the Bibliography Report will generate a list of sources that were used to create that group.
  • Chronology: This generates a chart of all the ceramic horizons that are characteristic of the group you selected.
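
The kind of summary the group Elevation report produces can be sketched in a few lines.  This is only an illustration of the idea, not the site's report code, and the elevation values below are invented.

from statistics import mean, median

def elevation_summary(elevations_m, bin_width_m=100):
    """Summarize a group's site elevations (metres) and bin them for a histogram."""
    stats = {
        "count": len(elevations_m),
        "min": min(elevations_m),
        "max": max(elevations_m),
        "mean": round(mean(elevations_m), 1),
        "median": median(elevations_m),
    }
    bins = {}
    for e in elevations_m:
        lo = int(e // bin_width_m) * bin_width_m
        bins[(lo, lo + bin_width_m)] = bins.get((lo, lo + bin_width_m), 0) + 1
    return stats, dict(sorted(bins.items()))

stats, bins = elevation_summary([12, 45, 75, 130, 180, 220, 260, 610])
print(stats)   # count, min, max, mean, median for the group
print(bins)    # histogram bins, e.g. {(0, 100): 3, (100, 200): 2, ...}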


[Slide of data sources: BIBLIO]

I began this Atlas with a copy of Simpson's Mycenaean Greece from 1981.  At the start I supposed that I had no need for a bibliography table in the DB.  Simpson would be enough.  By the time I reached my third site I had seen the error of my ways; at present the bibliography contains 1700+ titles and may be seen by anyone using the website.  On the Controls page you may examine the Atlas' coverage of any of the 40 most significant gazetteers used in creating the DB.  The sites were located in various ways: through gazetteers with good locational information, through examination of resources such as Topoguide and Topostext, and even through user contributions such as Wikimapia.  On my blog there are many examples of the techniques I used to find specific sites.  The internet has given wide access to the scholarly literature and many a dissertation was examined for information.  Each site was, if possible, confirmed in several ways before being put in the Atlas.


[SEARCH and HELP SLIDE]
There is also a powerful search facility and an extended help page.

You can search by any string and especially by the place keys.  The results come back as links to the pages on which that string appears.  This includes find spots and periods, author and book names, contents of notes, etc.
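
In outline the idea is simple: gather the searchable text for each page, test it for the query string, and return a link for every hit.  The sketch below is only an illustration; the text snippets and the URL pattern are hypothetical, not the site's real schema or addresses.

# Hypothetical searchable text, keyed by place key.  In the real site this text
# would come from the DB (notes, bibliography, ceramic horizons, etc.).
searchable = {
    "C0001": "Mycenae.  LH IIIB citadel; Simpson 1981; shaft graves ...",
    "C0002": "Tiryns.  LH IIIB-C lower town; notes on the megaron ...",
}

def search(query):
    """Return (place key, link) pairs for every page whose text contains the query."""
    q = query.lower()
    return [(key, "https://helladic.info/?placekey=" + key)   # hypothetical URL pattern
            for key, text in searchable.items() if q in text.lower()]

print(search("LH IIIB"))   # both sample pages match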

Summary and Future

API or Application Programming Interface.  

1. There are any number of online databases that can be integrated with this software.  When you design software you should take note of other databases that can enhance your own product.  The elevationapi.com interface is an example of this, but there are other DBs, such as hydrography or geology, that may add to and enhance your product.

2. One use of this DB may be to use its tables independently in other, completely different DBs.  You may, for example, be developing a DB on Mycenaean weapons.  In order to express the location of these finds you will have access to a kind of Mycenaean Site API that would 'serve' locations to online clients.  The API should be able to serve elevations, lat/lon pairs in several formats, alternate names, lists of ceramic horizons, biblio names, etc.
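
Here is a sketch of how a client might consume such a site API.  Since the API is still in development, the endpoint, query parameter, and JSON field names are all assumptions made for illustration.

import json
import urllib.request

def fetch_site(place_key):
    """Fetch one site's record from a hypothetical JSON endpoint."""
    url = "https://helladic.info/api/site?placekey=" + place_key   # hypothetical endpoint
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# The kind of record such an API could serve (all values illustrative):
example_record = {
    "placekey": "C0001",
    "name": "Mycenae",
    "alternate_names": ["Mykines"],
    "lat": 37.7308,
    "lon": 22.7561,
    "elevation_m": 240,
    "ceramic_horizons": ["LH I", "LH II", "LH IIIA", "LH IIIB", "LH IIIC"],
    "bibliography": ["Simpson 1981"],
}
print(json.dumps(example_record, indent=2))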

3. It's not clear to me that, for a product of this type, a simple lat/lon-to-name pairing will be adequate.  Sites need a story to go along with them - something like the approach of Topostext.

4. In the area of ancient history, ethnography, linguistic studies, etc. (everything having to do with antiquity) it is necessary, as in the hard sciences, to 'save the data'.  We cannot stake the future of knowledge representation and programming in this area on 'semantic networks' or 'webs' with their suffocating 'ontologies' or 'controlled vocabularies'.  Such things are advocated by some, not for any reward for the Humanities, but solely for the convenience of the computer.



[Thanks to contributors]

Guide to Posts that concern finding site locations in Greece

I've written a number of posts in the last few years (across two different blogs) that concern the location of various BA sites in Greece and how I found them (or didn't).  Here I present a list of those posts, which should make it easier to find them all.


Aigialea (Achaea) (C5004, C5005, C5006, C5008)

Amarynthos: Palaiochora (Euboea) (C1223)

Chalandritsa Region of Achaea (C653, C654, etc.)

Galatos and Stalos, Crete (C5762, C5763)

Lambaina Quarry (Messenia, C131)

Malesina, Hagios Georgios (Locris) (C5169)

Metsiphi on Euboea (C6878, etc.)

Orchomenos (Boeotia) and Lake Copais

Tithorea, etc. (C5152)

Valta in Messenia


Friday, October 8, 2021

Promiscuous Lex

 

‘Girl number twenty unable to define a horse!’ said Mr Gradgrind, ...  [1]

Edmond:         There are 72,519 stones in my walls. I've counted them many times.
Abbé Faria:     But have you named them yet? [2]


So, imagine that there are these two websites, http://www.napoleon-scholar-a.com and http://www.napoleon-scholar-b.org, and they both blog about, guess what, Napoleon Bonaparte.  And these are two reputable scholars, although A blogs mostly about the period before 1804 and B blogs mostly about the Imperial period, with some overlap.  Here’s the idea: someone says ‘what a great resource it would be if we could put these two together somehow’.  And when we consider that there’s a large amount of material on the web related to Napoleon it would be great if this could be automated.  That is, from a multiplicity of on-line resources, to create one large indexable or searchable reference on Napoleon![3]



And not just search; we should be able to create an automated reasoner to which we could ask questions about Napoleon.  Simple questions like ‘When was the Battle of Marengo?’ and more complex questions such as ‘Was Napoleon good for France?’  Automation is the key, but how would that be done?  Well, as a number of computer types have pointed out, all these sources use the very same nouns or referents.  For example they all use such words as: ‘Napoleon’, ‘empire’, ‘Pope’, ‘Josephine’, ‘France’, ‘Marengo’, ‘Austerlitz’, etc., etc.  What we need to do is come up with a formal way of representing all the relevant nouns, enumerate their properties, and relate them to each other.  The sites, after all, are quite different in style and presentation but they are semantically similar.  And that leads us to formulate such quasi-RDF (Resource Description Framework) triples as:


‘Napoleon’ ‘has-a’ ‘wife’;
‘Napoleon’ ‘is-a’ ‘general’;
‘Marengo’ ‘is-a’ ‘battle’;
‘general’ ‘has-a’ ‘army’;
‘Josephine’ ‘is-a’ ‘wife’;
‘wife’ ‘has-a’ ‘husband’


And if we defined enough of these, a large number to be sure but graduate students have lots of time, we’d create a representational form strong enough to describe Napoleon and all of his works.  Our automated reasoner would search the aforementioned blogs and find each noun and relate it to the relevant triplet in our database and automatically place it in context.  In that way the various sites on Napoleon would be united in a Semantic Web.  We would be able to ask questions about Napoleon or related subjects and not only learn where the answers are but the answers themselves.  And, of course, not just Napoleon but every conceivable subject – a grand semantic web that unites all knowledge (on-line at least) and allows us to ask questions about anything and receive complete and detailed answers along with the degree of the reliability of that answer.   And the important thing is that all of this would be automated.
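
To make the scheme concrete, here is a toy sketch of those triples stored as plain tuples with a pattern-matching query over them.  This is only my illustration of the general idea; it is not any real RDF or semantic-web toolkit.

# The quasi-RDF triples above, as plain (subject, predicate, object) tuples.
triples = [
    ("Napoleon", "has-a", "wife"),
    ("Napoleon", "is-a", "general"),
    ("Marengo", "is-a", "battle"),
    ("general", "has-a", "army"),
    ("Josephine", "is-a", "wife"),
    ("wife", "has-a", "husband"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query(predicate="is-a", obj="wife"))   # -> [('Josephine', 'is-a', 'wife')]
print(query(subject="Napoleon"))             # everything asserted about Napoleon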

This kind of work is attributed to Tim Berners-Lee.   But, of course, none of this is new.  Philosophers have been trying to reduce reality to a series of unambiguous predicates since the dawn of time.  The only thing that’s new here is the intended scale and the means; computers have allowed dreamers to envision a totally automated and effortlessly constructed compendium of every conceivable statement about reality.

It is darkly curious, then, that none of this ever seems to succeed.   No matter how many buzz-words are invented, no matter how many convincing papers are written, no matter how many conferences are held, web-sites designed, contributors or gullible foundations (looking at you, NEH) milked for 'start-up' money - none of this ever seems to work.   

So, whenever you DHers hear the words ‘Semantic Web’, ‘RDF’, ‘XML’, or 'linked open data' I solemnly warn you that you are about to be bamboozled into wasting huge amounts of your valuable time.  Please take that to heart.  Just ignore advocates of such schemes; like the religious fanatics that pass out pamphlets at your door (and religious fanaticism is exactly what drives the concept of the semantic web), they’ll go away if you ignore the doorbell.  Remember that you are scholars and RDF/XML semantic web schemes are the death of scholarship.

What’s so wrong about the Semantic Web?

Reality.[4]

The problem is that there is an infinite number of domains of discourse and no Semantic Web can ever hope to unite them.  To see this imagine that we have a third website to be covered by our Napoleonic RDF.  It is called ‘www.napoleon-scholar-c.com’ and it tells the compelling story of how Napoleon came from outer space to wreak havoc in order to pave the way for an alien invasion.  But, foiled by the crafty British, and imprisoned on Saint Helena, his avatar went back into space – there to bide its time on a moon of Saturn where it waits to try again.[5]  And, even though this site uses the same referents as the first two sites, ‘Napoleon’, ‘Marengo’, ‘Josephine’, etc., and even though its propositions can be expressed in the same or similar RDF, it does not belong to the same domain of discourse.  No attempt to semantically unite these three sites can ever lead to anything except nonsense.[6]  Now, of course, you’ll say that no cuckoo web site like that should be included in our semantic web.  But a human being would have to make that judgment.  To make the judgment, that is, that this web site belongs to a totally different domain of discourse.  So much for the dream of automation.

And, in fact, there's no guarantee that any particular web site is consistent in the domains of discourse that it presents.  That means that even if you choose a website to include in your semantic web scheme, someone knowledgeable still has to go through each statement and test it for reliability (however reliability is defined in your particular semantic web).[6a]


Darker examples could be adduced.  Imagine two web-sites, ‘www.darwin-savior-of-mankind.com’ and ‘www.the-beagle-was-only-a-dog.info’, the first a pro-evolution site and the second vehemently anti-evolution.  They both use the same terms, ‘evolution’, ‘fitness’, ‘selection’, and in, probably, very similar ways.  The same RDF could be formed for both.  But at some point someone is naively going to ask our semantic web about the truth value of evolution and survival of the fittest.  Any semantic web that tries to unite these two domains of discourse will be incoherent on that question.  There is no knowledge schema that covers or can cover these two separate realities.  Again the problem could be solved by a human being culling the web sites covered.  That is, by reading all of them and making a human judgment about which are reliable.  (Another name for this is 'scholarship'.)  Again, the death of automation.

And what about these two: ‘www.abortion-is-murder.com’ and ‘www.celebrating-roe-v-wade.net’?  Or these two: ‘www.gay-is-the-future.org’ and ‘www.true-cause-of-hurricanes-revealed.net’?  Or these two: ‘www.united-nations-benefits.gov’ and ‘www.real-no-shit-black-helicopter-sightings.info’?  Or these two: ‘www.my-guns-my-self.me’ and ‘www.gun-control-failure-scandal.info’?  Or these two: https://www.cdc.gov/coronavirus/2019-ncov/index.html vs. 'www.wake_up_sheeple.us'?

In other words the proposed RDF schemes will fail precisely where we, as human beings, are most concerned to know something reliable.[7]  That is, where our very selves are most involved, RDF and related schemes are powerless.  RDFs through all time have relied on the idea that all knowledge is one; that Truth is One.  I blame Plato for this but that’s just me.  The fact that some of these RDF schemes are ‘ISO-certified’ is just the rotted icing on the absurdist cake. [8]


All knowledge is not reducible to atoms.  And call me a grumpy old man but I have decades of experience in advanced computer science and I've never personally encountered a computer scientist who was educated about anything outside the narrow field of computers (and it is a narrow field).  They are not to be trusted on the issues with which the rest of us are concerned (although I might make an exception for Jaron Lanier).

What divides us as human beings isn’t just a few propositions which, once we learn them, will put us on the track to ‘right thought’.  It is not information that divides us.  This is the classic mistake of computer scientists – and the Holy Grail for every totalitarian.  Au fond, most computer scientists really believe that words are things.  But they aren’t.  We, as human beings, live in our own inherently valuable universes.  Not all of those universes can be harmonized with all the others.  What separates these universes – these selves – are not wrong propositions, or bad-thought, but deeply felt passions, needs, appetites, and loves.  Other human universes cannot be stormed by the Dialectic.  Our connections have to be built up patiently over time.

And no automation can replace scholarship.  By scholarship I mean the several activities of gathering evidence, organizing, patient collation, reflection, judgment and the expression of these activities in the form of essays, books, diagrams and, yes, even in the form of web sites or blogs.  There is no grand slam against reality; no Tower to the Heavens that we can build that will let us storm the citadel of knowledge.   We have to patiently scrape away at the matrix of the Unknown with our small intellects in order to see it more plainly.

Just as we have to work to see each other more plainly.


Endnotes

[1] Hard Times, Charles Dickens

[2] The Count of Monte Cristo, Jay Wolpert, 2002.

[3] Paul Ford suggests exactly this approach for sociobiology.  See Ford [2003].

[4] The best critique I know on this subject is Hubert Dreyfus’ invaluable (it deserved a Pulitzer) What Computers Can’t Do: A Critique of Artificial Reason from 1972 and his new edition, What Computers Still Can’t Do, from 1992.  The budding DHer can also benefit by reading the amusing remarks of Turow [2010] on Tort law.  Turow shines a brilliant light on this very problem of the connection between clearly expressed facts and reasoning about those same facts in various contexts.  The money quotes are:

"Him and his goddamn questions, I thought, his crazy hypos: If battery is a mere offensive touching, 'Is it battery to kiss a woman good night, if she demurely says no?  To push a man off a bridge that's about to collapse?  ...
    I wondered when he would cut it out.  There was no answer to these questions.  There never would be.
    I sat still for a second.  Then I repeated what I'd just thought to myself: There were no answers.  That was the point, the one Zechman - and some of the other professors, less tirelessly - had been trying to make for weeks.  Rules are declared.  But the theoretical dispute is never settled.  If you start out in Torts with a moral system that fixes blame on the deliberately wicked - the guy who wants to run somebody over - what do you do when that running down is only an accident?   How do you parcel out blame when A hopes to hurt B in one way - frighten him by shooting a gun; and ends up injuring him in another freakishly comic manner - clobbered on the head with a falling duck?"   Scott Turow, One-L, pp. 112-113.

and this:

"Was it assault if a midget took a harmless swing at Muhammad Ali?  Was it negligent to refuse to spend $200,000 for safeguards on a dam which could wash away $100,000 worth of property?", p. 62.

Turow's example of the collapsing bridge is a very simple formulation of a famous problem in Law which is described in Leo Katz, Bad Acts and Guilty Minds, Chicago, 1987, p. 210:
"Henri plans a trek through the desert.  Alphonse, intending to kill Henri, puts poison in his canteen.  Gaston also intends to kill Henri  but has no idea what Alphonse has been up to.  He punctures Henri's canteen, and Henri dies of thirst.  What has caused Henri's death?  Was it Alphonse?  How could it be, since Henri never swallowed the poison.  Was it Gaston?  How could it be, since he only deprived Henri of some poisoned water that would have killed him more swiftly even than thirst.  Was it neither then?  But if neither had done anything, Henri would still be alive.  So who killed Henri?"  Katz follows up with a number of real-world examples.

Now if we tried to express these facts in triples form we might have this:

(1) Alphonse - Poison Water - Henri
(2) Gaston - Steal Water - Henri
(3) Steal Water - cause - Thirst
(4) Thirst - cause - Death
(5) Poison Water - cause - Death
(6) Henri - Death - thirst

Now that we have our DB of triples we ask our automated Reasoner 'Who killed Henri?'  It's hard to imagine a Reasoner that wouldn't conclude that Gaston killed Henri with a certitude of 100%, and then only because it happens upon the 'Henri - Death - Thirst' triple first in the database.  Thus the ambiguity in the situation is elided by the completely unrelated chance ordering of the triples in the DB.  A Greek teacher pointed out to me once that expressing an argument in Greek imposed an 'artificial clarity' on the argument.  So here.

An automated Reasoner will start out with our question

<blank> - cause death - Henri

and is asked to fill in the <blank>.  It looks for a triple about Henri's death and does this:

(6) Henri - Death - Thirst
(4) Thirst - Cause - Death
(3) Steal Water - cause - Thirst
(2) Gaston - Steal Water - Henri
so:
<Gaston> - cause death - Henri

No fuss, no muss.  Ambiguities resolved.  Our automated reasoner sends Gaston off to prison for life as a reward for his saving Henri from a horrible death by poison.
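
The chain just described can be rendered in a few lines of toy code to make the point concrete.  The triple store mirrors (1)-(6); the lookup strategy is my own illustration of the shortcut such a naive reasoner takes, and notice that Alphonse's triples are simply never visited.

# Toy triple store mirroring (1)-(6) above.
triples = [
    ("Alphonse", "poison water", "Henri"),    # (1)
    ("Gaston", "steal water", "Henri"),       # (2)
    ("steal water", "cause", "thirst"),       # (3)
    ("thirst", "cause", "death"),             # (4)
    ("poison water", "cause", "death"),       # (5)
    ("Henri", "death", "thirst"),             # (6)
]

def who_killed(victim):
    """Follow (6) -> (3) -> (2): manner of death, the action that causes it, the actor."""
    manner = next(o for s, p, o in triples if s == victim and p == "death")    # 'thirst'
    action = next(s for s, p, o in triples if p == "cause" and o == manner)    # 'steal water'
    return next(s for s, p, o in triples if p == action and o == victim)       # 'Gaston'

print(who_killed("Henri"))   # -> 'Gaston', with 100% 'certitude' and no hint of ambiguity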

[5] A very mild formulation compared to what we often find on the internet.  Even the practitioners of the Pleiades Linked Open Data initiative have some slight awareness of the difficulties of describing even a single individual in RDF triples.  In Isaksen et al. [2014] we read the following:

"people can be harder to denote, especially where the evidence is fragmentary. Should Aristotle be defined by his place of birth, his association with Athens (of which he was not a citizen), his contributions to philosophy (which?), his tutoring of Alexander the Great, or a combination of these and other ‘facts’?"

I love the shudder quotes around the word 'facts'.  And 'fragmentary'?  Aristotle is one of the best described individuals from antiquity.  If we can't describe Aristotle what will we do with Jesus?  Socrates?  Julius Caesar?  Would anyone want to be the worker who reduces Alexander the Great to RDF triples?

[6] 'unite': The real purpose behind the creation of all these RDF triples is to create a reasoning machine (in our terms a piece of software) that would virtually traverse the Semantic Web answering questions.

[6a] On issues of consistency, non-contradiction, varying authorities, out-of-date data, etc. in the context of the semantic web see Wright [2011] 77-78.

[7] Facing exactly this problem of scaling up, Dreyfus [1972] says (quoting from memory): We don't want to play automated chess.  We want to know how to find our way out of the woods when we're lost.  We want to know which fork to use for the salad when dining at the White House.

[8] ISO is another bad idea from the ’80s whose sell-by date has long passed.


Bibliography

Dreyfus [1972]: Dreyfus, Hubert.  What Computers Can't Do: A Critique of Artificial Reason.  Harper and Row.  1972.

Dreyfus [1992]: Dreyfus, Hubert.  What Computers Still Can't Do.  MIT Press.  1992.

Ford [2003]: Ford, Paul.  'A Response to Clay Shirky's “The Semantic Web, Syllogism, and Worldview”'.  www.ftrain.com/ContraShirky.  November 2003.  Online here.

Isaksen et al. [2014]: Isaksen, Leif, Rainer Simon, Elton Barker, and Pau de Soto.  'Pelagios and the emerging graph of ancient world data'.  June 2014.  DOI: 10.1145/2615569.2615693.  Online here.

Katz [1987]: Katz, Leo.  Bad Acts and Guilty Minds: Conundrums of the Criminal Law.  University of Chicago Press.  ISBN: 0-226-42592-4.  1987.

Turow [2010]: Turow, Scott.  One-L.  Penguin Books (reprinted 2010).

Wright [2011]: Wright, Holly M.  Seeing Triple: Archaeology, Field Drawing and the Semantic Web.  Dissertation for the degree of Ph.D., Department of Archaeology, The University of York, England.  September 2011.  Online here.

Stous Athropolithous

  (All references to Cnnn or Fnnn can be found in the Mycenaean Atlas Project site at helladic.info) I've been working through the list ...