Where this gets frustrating for many people, myself included, is in assessments of justice. Did Trump's campaign collude? Yes! That's not the same thing as saying we have met the evidentiary standard for prosecuting the thing called "criminal conspiracy" as set forth in the U.S. Criminal Code, but the Manafort/Kilimnik meeting, at least, shows "collusion" by any reasonable definition, to anyone applying basic reasoning skills. Did Trump interfere improperly in the investigations? Yes! In so many ways, from firing Comey onward. Obstruction of justice, though, turns on the technical point of establishing an underlying crime, creating the basic problem that if you successfully obstruct justice, you... um... can't be prosecuted for it! Yeah, that's pretty much as stupid as it sounds. Maybe you get a sense of where I'm going with this post.
So, are Trump and his people guilty in a moral sense? Yes. In a legal sense? Um...
The law is not actually a code of justice. It is a set of rules. It's a game, really, meant to approximate justice. Think of it like this. What you really want is a benevolent king whose, um... "court" dispenses justice based on the intrinsic goodness and wisdom of the king and his capability to parse even the most complex moral dilemmas, but there ain't no such thing, so you write some "code" that approximates the behavior of your benevolent king. You want your "code" to create an automated system that a blindfolded observer would mistake for the benevolent king, thereby passing the moral Turing test. Fun with etymology today.
Your "code" is the law. The... legal "code." [Kick, kick... Hey, horse! Are you still dead?] It's not justice, but a program intended to approximate it. The failures of justice are what reveal that we are observing, not a true benevolent king in his "court," but some shoddy code that can't pass the Turing test. Not a king, but a machine.
Statutory law is an attempt to take arbitrariness out of the process because you can't trust that any given decision-maker will be benevolent. The more you make the system rules-based, the less leeway you give to decision-makers. That means you have to make the rules more precise. That puts a lot of pressure on whoever writes the rules, and on the structure of the rules themselves, to approximate "justice" in the circumstances they attempt to foresee.
Law, in a sense, is the worst of all worlds in artificial intelligence terms. You have all of the challenges of writing "code" to create your AI, but then the implementation of that AI system is carried out, not by a machine, but by people anyway, so the system can fail either because you wrote your code sloppily, or because your code wasn't carried out. Failure at every turn.
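For the fellow geeks, the two failure modes could be sketched in a few lines of toy Python. Everything here is my illustration, not anyone's actual legal taxonomy: the rule table, the function names, and the "dishonest prosecutor" flag are all made up for the sake of the metaphor.

```python
def statute(conduct):
    """The written 'code': rigid rules drafted in advance.
    Failure mode #1: the drafter must foresee every circumstance,
    and anything unforeseen falls through the cracks."""
    # A deliberately tiny, made-up rule table for illustration.
    rules = {
        "criminal conspiracy": "guilty",
    }
    return rules.get(conduct, "no crime on the books")

def human_implementation(conduct, prosecutor_is_honest=True):
    """The rules are carried out by people, not machines.
    Failure mode #2: even well-drafted code can go unexecuted."""
    if not prosecutor_is_honest:
        return "case quietly dropped"   # failure in execution
    return statute(conduct)             # failure possible in the code itself

# "Collusion" is morally obvious but isn't a statutory term,
# so the rules-based system returns nothing either way:
print(human_implementation("collusion"))  # -> no crime on the books
```

The point of the sketch: the outer function can fail even when the inner one is perfect, and vice versa, which is the "failure at every turn" above.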
Then, you've got these critters called, "lawyers." Lawyers range in honesty from those who look for ways to exploit existing rules on behalf of their clients to... well... Michael Cohen. (Does Rudy Giuliani even count as a lawyer?) Cohen is an extreme example, but he's also not alone. There really are some dishonest lawyers out there who don't just exploit the rules-- they lie and break the rules. Law is one of those professions that rewards lying. Yes, you can succeed in law without being a shameless liar, but it is harder.
And this lovely, little AI system we have doesn't run on Von Neumann architecture. It runs on a bunch of little Michael Cohens. (Yes, I know that analogy doesn't quite work, but c'mon. It was funny, if you're a geek.)
Failures of our AI system attempting to approximate the benevolent king, then, occur both because of failures in the written code and because of human behavior.
Solution? What solution? I have set this up as a choice between some AI that can't pass the Turing test, and the hypothetical benevolent king that doesn't exist, and never has. What, Solomon?! Here's the thing about "justice" in the abstract and in reality. Lots of people write about it, but the idea of there actually being agreement on the philosophical principles, or on the type of person who could implement them... no. Bullshit.
So, instead, we have our stupid, little AI system called the US Criminal Code (hey, there's that word again!). We have a Department of Justice, and yadda, yadda, yadda. All of this is intended to take what would be, in a monarchical system, the unilateral decision of the king, and turn it into something that is supposed to be indistinguishable from the decision of a benevolent king, like a computer program passing the Turing test, and convincing an outside observer that it is a sentient being.
Then again, the whole point here is that Trump thinks he's king, and acts like it. So...
Oh, fuck it.