cthia wrote:Jonathan_S wrote:But even if we say the first case is true I wouldn't give such a model much credit for predicting Honor's next targets; not when the best they seem to be able to say about it is that it is not "a good predictive model".
I agree. As you said, I think the author wanted to imply that. But I hardly think Linda and Lewis were willing to leave the entire thing up to fortune telling without having consulted an expert system. The expert system's answer was simply unclear or incomplete. I wouldn't be surprised if the predictive model would have come up with the solution had they had more time to refine the data and ask it the right questions. They just didn't have the time.
They had enough data, because the data that their intuition worked on (the tea leaves) existed.
[snip]
I agree that RFC was trying to convey that it was human intuition. But purely intuition? Without access to any of the known data? Intuition works on data, even if subliminally. Intuition isn't "conjured up" out of thin air.
Jonathan_S wrote:"Purely intuition"? Of course not.
That's why I said "it was
in large part human intuition that led them to winnow down the entire Republic to a list of 10 most likely, and 15 more still likely, target systems that Honor might try for.". "in large part" != "purely".
Both you and I used essentially the same qualifier: your "in large part" and my "for the most part." So we do seem to agree there.
Jonathan_S wrote:However I'm less convinced they had enough data for the predictive model to work with if only they'd had more time. It seems to me that the issue is identifying which system characteristics the Manties were looking at, and then how they were weighted.
Absolutely. And I think they had enough data already.
Jonathan_S wrote:However maybe I'm making a different distinction, or using terminology slightly differently, than you are. I'd view the expert system as the software that attempts to develop a predictive model based on training datasets and other input given to it, and the predictive model as the result of feeding the expert system such data.
We are in agreement on the terminology as well. To be precise, what gets sold as an "expert system" is really just generic, off-the-shelf software (a shell) that is used to build and run a specific kind of knowledge base. A couple of terms are used interchangeably in today's market in a way that is misleading: the claim is that expert systems are the forerunner of the knowledge base, but the shell doesn't become an expert system until you feed it a domain-specific knowledge base. For example, my sister surprised me over twenty years ago when she told me she uses an expert system in the medical field.
You can train the generic software package to become an expert in any field by feeding it loads of data. That is its knowledge base. It doesn't become an expert until you feed it.
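To make that distinction concrete, here is a toy sketch in Python. Every rule and fact in it is invented by me purely for illustration; the little infer() function is the "off the shelf" shell, and it knows nothing about navies (or medicine) until a knowledge base of rules is loaded into it.

def infer(facts, rules):
    """Generic forward-chaining engine: apply rules until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The knowledge base is what makes the shell an "expert" in anything.
NAVAL_RULES = [
    ({"light picket", "deep in Republic space"}, "low interception risk"),
    ({"many reps in Congress"}, "high political impact"),
    ({"large civilian economy"}, "forces relief diversion"),
    ({"low interception risk", "high political impact",
      "forces relief diversion"}, "plausible Eighth Fleet target"),
]

facts_about_system = {"light picket", "deep in Republic space",
                      "many reps in Congress", "large civilian economy"}
print(infer(facts_about_system, NAVAL_RULES))

Load NAVAL_RULES and the same shell "knows" raiding; load my sister's medical rules instead and it becomes a medical expert. The shell never changes, only its diet.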
Haven would have had an expert system in service for, what, decades? Centuries? That is a lot of knowledge being fed into it and constantly updated. After all, as I said, I imagine the Hollow Tank has access to the very same knowledge base fueling the Admiralty's decisions about fleet dispositions, pickets, etc. But I imagine the analysts and tea leaf readers access it from different stations to keep the Hollow Tank clear for the Admiralty.
Jonathan_S wrote:Yes the Republic Navy's intelligence analysts would have had expert systems to try to develop predictive models -- but in this case they didn't feel there was enough data to let the expert system come up with a good model.
You are correct. Lewis said that he didn't think they had enough data, and I disagreed with that in another post. They did have enough data. Also, as I mentioned, I am surprised that he would even suggest that. It is impossible to know whether you have enough data until you submit it to be modeled, because even though the data you submit may be sparse, the knowledge base compares and sorts it against a humongous mound of accumulated information. I have seen expert systems spit out a conclusive answer from an insanely small input.
Jonathan_S wrote:(But I don't know whether they made the decision by trying and seeing that it gave poor results; or whether they knew by past experience how much data would be required and knew they didn't yet have that much data)
As I said upstream, it is impossible to know whether you have enough data until you query the expert system. It may have been able to produce an adequate model after just Cutworm I. That was only one operation, but it comprised five different targets.
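In other words, the only real test is to run it. A toy sketch of what I mean, with completely made-up systems and numbers, and with a crude stand-in "model" of my own invention (nearest to the average profile of the systems already struck, nothing from the books): build it from the Cutworm I strikes, let it rank everything else, then see how its picks compare with what actually got hit next.

import statistics

# Invented feature vectors: (reps, economy, picket strength).
SYSTEMS = {
    "A": (9, 7, 2), "B": (8, 8, 3), "C": (7, 9, 2), "D": (9, 6, 1),
    "E": (8, 7, 2), "F": (3, 2, 8), "G": (4, 3, 9), "H": (8, 8, 2),
    "I": (7, 7, 3), "J": (2, 1, 7),
}
CUTWORM_I = {"A", "B", "C", "D", "E"}   # the five systems already struck
CUTWORM_II = {"H", "I"}                 # what actually got hit next

# Stand-in model: similarity to the average profile of the struck systems.
profile = [statistics.mean(SYSTEMS[s][i] for s in CUTWORM_I) for i in range(3)]

def score(name):
    return -sum((f - p) ** 2 for f, p in zip(SYSTEMS[name], profile))

candidates = sorted(set(SYSTEMS) - CUTWORM_I, key=score, reverse=True)
top_picks = set(candidates[:3])
print("Top picks:", top_picks, "| actually hit:", top_picks & CUTWORM_II)

In this toy case the five "Cutworm I" data points were plenty. You only find that out by asking.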
Jonathan_S wrote:There had only been 2 series of raids before they were successful in ambushing Honor at Solon. And put together Cutworm I & II had hit only 9 systems out of the entire Republic. That's such a small data set that even if you'd perfectly identified the key characteristics the Manties were using for target selection there would be multiple different relative weightings of those characteristics which would all give you that same 9 target set -- but which would result in quite different guesses as to the most likely next target(s).
First off, nine systems translate into a lot of data in the expert system's knowledge base, which has been compiled on those nine systems over a long time. As I said above, from my long experience with expert systems, the five targets hit by Cutworm I should have given them a very promising model, if not a good one. Consider that their expert system has a knowledge base going back many decades or centuries, so just the five systems that were hit in Cutworm I would have triggered a lot of data in the knowledge base.
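To be fair to your point about weightings, here is a toy example (names, features, and numbers all invented by me) of how two different weightings can both "explain" the same observed strike list and still disagree about the next target. That ambiguity is exactly what more data, or the right query against the knowledge base, has to resolve.

# Invented candidates: (political clout, economic value, picket weakness).
CANDIDATES = {
    "Alpha":   (9, 2, 8),
    "Beta":    (8, 3, 9),
    "Gamma":   (7, 8, 7),
    "Delta":   (6, 9, 8),
    "Epsilon": (8, 7, 6),
    "Zeta":    (2, 9, 3),
    "Eta":     (9, 1, 2),
}
OBSERVED_STRIKES = {"Alpha", "Beta", "Gamma", "Delta", "Epsilon"}

def rank(weights):
    scored = {name: sum(w * f for w, f in zip(weights, feats))
              for name, feats in CANDIDATES.items()}
    return sorted(scored, key=scored.get, reverse=True)

for weights in [(3, 1, 2), (1, 2, 2)]:
    order = rank(weights)
    print(weights,
          "explains the strikes:", set(order[:5]) == OBSERVED_STRIKES,
          "| predicted next target:", order[5])

Both weightings put the five struck systems on top, yet one points at "Eta" next and the other at "Zeta."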
Jonathan_S wrote:Further, the initial report appears to have predated Cutworm II, since Marquette and Theisman were discussing the success at predicting Des Moines, Fordyce, and Chantilly -- all of which were hit in the 2nd set of raids. So the initial report only had the 5 targeted systems from Cutworm I to use as data points to try to work out where Honor's forces might strike next. From those 5 they worked out 10 primary targets (2 of which were correct) and 15 less likely targets (1 of which was correct).
So I was somewhat wrong before. They had called the model "not good" after placing their bets. But I'd missed that it was their bets for Cutworm II. With the additional data points of the 4 systems hit in that 2nd phase of raids they would have been able to try to refine the predictive model and it might well have been much better (possibly even considered good) before they had to make their (ultimately successful) bet on the target(s) for what turned out to be Cutworm III.
Yes! As I said, I would have been surprised if the expert system didn't yield promising results after just Cutworm I. Cutworm I hit five systems, and there is an awful lot of data about those five systems, compiled and collated for centuries. That they then had the additional data on four more systems and still no good model really surprises me.
The expert system's knowledge base would include everything that fueled the tea leaf readers' intuition. Things like ...
"Basically," Marquette said, sitting obediently, "they tried looking at the problem through Manty eyes. They figure the Manties are looking for targets they can anticipate will be fairly lightly defended, but which have enough population and representation to generate a lot of political pressure. They're also hitting systems with a civilian economy which may not be contributing very much to the war effort, but which is large enough to require the federal government to undertake a substantial diversion of emergency assistance when it's destroyed. And it's also pretty clear that they want to impress us with their aggressiveness. That's why they're operating so deep. Well, that and because the deeper they get, the further away from the 'frontline' systems, the less likely we are to have heavy defensive forces in position to intercept them. So that means we should be looking at deep penetration targets, not frontier raids."
Even your very own criteria from a post upstream are really quite good ...
Jonathan_S wrote:Maybe. But I think the most useful thing a computer could do in this situation is be a queryable database of all kinds of pertinent information on systems and their planets.
Letting the analysts think up various parameters or characteristics (location, industry, political clout, defenses, etc.) and get back the resulting lists of matching systems. (Because there are too many systems to have them all in your head)
The problem with using a generic computer is that it will simply hand you raw data, and there would be tons of it: uncollated, not cross-referenced, not analyzed for the very patterns you are looking for. You would essentially end up with piles of data on your desk, like in the old days, with many eyes hunting for that one clue formed by information spread across two separate pieces of paper. An expert system does the filing and collating and grouping together of facts. Even just Cutworm I should have generated tons of information, and an expert system will present that information in an efficient form that is easy on the eyes and brain, the way a spreadsheet arranges statistics.
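Something like this toy collating step is what I mean by filing and grouping (the records themselves are completely invented, just to show the shape of the thing): scattered facts from different reports get merged into one dossier per system instead of sitting on the desk as separate piles.

from collections import defaultdict

# Invented records in the form (system, category, fact).
RAW_REPORTS = [
    ("Des Moines", "politics", "34 reps in Congress"),
    ("Des Moines", "defense",  "picket: two obsolete battleships"),
    ("Fordyce",    "economy",  "major civilian shipyards"),
    ("Des Moines", "economy",  "large orbital farming habitats"),
    ("Fordyce",    "politics", "21 reps in Congress"),
]

# Group every fact under its system, one dossier per system.
dossiers = defaultdict(dict)
for system, category, fact in RAW_REPORTS:
    dossiers[system][category] = fact

for system, facts in sorted(dossiers.items()):
    print(system, "->", facts)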
Jonathan_s wrote:"Okay, if we assume 8th fleet doesn't want to hit anything with more than 6 weeks travel out; but has at least X reps in parliament, what systems does that leave? Okay, let's try filtering out the ones with fleet bases as being too tough a target; now what do we have?"
"That looks promising, but lets see if anything significantly better pops out if we push the range out to 8 weeks"
"I think they'll weight naval infrastructure damage more highly, which systems have repair yards or missile depots?"
"No, I think they'll want to take out the most mobile defenders that they safely can; rank the systems by size of their modern defensive task forces"
Very good criteria.
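Your queries above would look something like this in practice. A toy sketch with invented systems and thresholds; the point is the narrowing-down, not the numbers.

# Invented candidate systems and their characteristics.
SYSTEMS = [
    {"name": "P", "weeks_out": 5, "reps": 30, "fleet_base": False, "repair_yard": True},
    {"name": "Q", "weeks_out": 4, "reps": 12, "fleet_base": True,  "repair_yard": True},
    {"name": "R", "weeks_out": 7, "reps": 40, "fleet_base": False, "repair_yard": False},
    {"name": "S", "weeks_out": 6, "reps": 25, "fleet_base": False, "repair_yard": True},
    {"name": "T", "weeks_out": 3, "reps": 8,  "fleet_base": False, "repair_yard": False},
]

def pick(systems, *tests):
    """Keep only the systems that pass every test."""
    return [s for s in systems if all(t(s) for t in tests)]

# "No more than 6 weeks out, at least X reps in parliament..."
shortlist = pick(SYSTEMS, lambda s: s["weeks_out"] <= 6, lambda s: s["reps"] >= 20)
# "...filter out anything with a fleet base as too tough a target..."
shortlist = pick(shortlist, lambda s: not s["fleet_base"])
# "...now weight naval infrastructure: who has repair yards?"
shortlist = pick(shortlist, lambda s: s["repair_yard"])
print([s["name"] for s in shortlist])

Each question chops the chaff down further, exactly the way your imagined analysts were doing it out loud.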
Jonathan_S wrote:That's not going to give you your final answers; but it's a quick way to filter out the majority of the chaff based on various criteria. Then you get different analysts all applying their own theories on Honor's targeting criteria and sit down and compare lists with each other.
It didn't give the final answer, but it just as easily could have. At any rate, you wouldn't want to make the analysts' job more difficult by handing them raw data that hasn't been cross-referenced, collated, or even checked for patterns in the way only an expert system can. An expert system is an expert on patterns, and patterns are exactly what they were looking for: the final submission was itself a pattern, conceived by the readers from the same data that should long since have been part of the knowledge base.
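And the pattern hunt itself is dead simple once the facts are collated. A toy sketch (the attributes are invented by me, only the method matters) of asking which traits every struck system shares that the Republic at large does not:

# Invented attribute sets for a handful of systems.
ATTRIBUTES = {
    "Hera":      {"deep raid range", "light picket", "heavy rep count"},
    "Augusta":   {"deep raid range", "light picket", "repair yards"},
    "Chantilly": {"deep raid range", "light picket", "heavy rep count", "repair yards"},
    "Barnett":   {"frontline", "fleet base", "heavy picket"},
    "Lovat":     {"deep raid range", "fleet base", "heavy rep count"},
}
STRUCK = {"Hera", "Augusta", "Chantilly"}

# Traits common to every struck system, minus traits common to everyone.
shared_by_struck = set.intersection(*(ATTRIBUTES[s] for s in STRUCK))
shared_by_everyone = set.intersection(*ATTRIBUTES.values())
print("Pattern in the strikes:", shared_by_struck - shared_by_everyone)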
Jonathan_S wrote:I doubt an expert system would be of much use because what they're struggling for is gaining intuition on how Honor and the Admiralty are selecting their targets -- so humans thinking about possible alternate criteria seems the most important part. Letting an expert system do that "thinking" for you kind of defeats the purpose -- at least when you don't have enough training data for it to get good. In this case figuring out whether the judgement criteria makes sense, in the absence of sufficient data, is the key thing -- and that's the thing expert systems are worst at; because they can't apply intuition to insufficient data.
But there wasn't insufficient data; that is where we disagree. The expert system should have had the same data set that fueled the tea leaf readers' intuition, the same tea leaves from the textev I posted above.
Not just sometimes, or most of the time, but all of the time, the success of an expert system will depend upon asking the right questions, even if you have a complete data set to present to it. This reminds me of the many times Geordi La Forge and Commander Data got annoyed while querying the Enterprise's knowledge base on a certain matter. They caught no joy until they asked the right question with the right command. The most memorable case was when Dr. Crusher was querying the database trying to determine where the rest of the ship's entire crew had gone, and why she was the only one left on the Enterprise. "Oh shucks! The answer has to be there. Well computer, give me the answer to this. And cross reference it with that." "Bingo!"
Jonathan_S wrote:Actually generating your final selection is just a result of finally converging on a collective best guess of how your enemy seems most likely to pick their targets.
An educated best guess arrived at by inference.
Jonathan_S wrote:I don't think we know how many systems were on the more likely and less likely lists for that; but they obviously included Solon which was one of the 2 systems Honor's forces did hit. (Though why her other target, Lorn, didn't have an ambush of its own set isn't, to the best of my recollection, explained. Could be the analysts missed that one, or rated it too low on their list; or could be the RHN had other reasons for which systems they could afford to set their few concentrated forces at.)
I don't have searchable text, but I do recall that Honor actually had a method to her madness. Part of it was the political pressure her strikes would put on the Republic and its navy. There were at least several considerations, as I recall. So Honor's own targeting model should have been a subset of what was already in the expert system's knowledge base.