Well, duh. It's fair to say it's dangerous to argue "torture doesn't work," because we may eventually find ways to make it work. It's similar to premising arguments against the death penalty on its cost - if your opposition is in fact based on moral grounds, what do you do if we find a way to avoid the expense currently associated with death penalty trials and appeals? Switch gears and say, "No, really, ignore what I said before..." and try to salvage some shred of credibility? The response to that is to say, "It's not likely that we'll find a way to avoid those costs, so I'll take the risk," but that doesn't exactly vindicate you on the moral front.
On the subject of torture, Megan McArdle argues:
I've long said that we shouldn't waste time arguing that torture doesn't work. For one thing, the evidence for those arguments seems empirically shaky, especially since many people employing them insist on arguing that torture basically never works, rather than that it doesn't work very often and therefore has a bad cost-benefit ratio. For another, arguing that something doesn't work isn't necessarily an argument for not doing it--it could just as easily be an argument for improving our technique. And if advances in brain scanning research let us develop a reliable lie detector, as seems possible in the relatively near future, then torture will work very, very well.
Superficially the argument is the same - we can use torture to get answers, and we'll know they're true because the brain scan lights up in a particular way. We're assuming, I guess, that we'll find a way to fit a waterboard inside a PET scanner. Perhaps the torture would be administered by remote control, via a robot, so that the technician could avoid being exposed to undue levels of radiation. And how would we conduct studies to determine whether the torture was interfering with the results of the scan? Trial and error?
At the risk of falling into my own trap, let's talk brain scans. Eventually we may develop a waterproof version of the PET scanner that does not expose people outside of the device to radiation (I'm assuming we're not too concerned about the person inside the device), and we may gather enough information from its use to screen out any effects of torture on the brain scans. But that seems like a lot of unnecessary, costly work, because if we do have an effective lie detector that works off of brain scans... why would we actually need to torture?
Assuming you have a working lie detector test, do you need anything more sophisticated than "yes or no" questions to get virtually any information you require? If you have a bit of intelligence to work with, it's even easier. If we're imagining this sophisticated brain scanning lie detector technology, it may be that we don't even need an answer - we may be able to determine the answer to a binary question by the way the suspect's brain lights up just from hearing the question. It's much easier to apply a lie detector to a binary question than to a narrative, where truth may be mixed with fiction, and both may be mixed with pleas for mercy. ("He's telling the truth that there's a plan to bomb a U.S. military target, or that he wants us to 'please stop', maybe both.")
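The claim that yes/no questions suffice is really an information-theoretic one: a perfectly reliable binary oracle lets you recover any answer from a known set of candidates by binary search, in about log2(N) questions. A minimal toy sketch in Python (the scenario, names, and candidate list are all hypothetical, and the "oracle" is just a stand-in for a never-wrong lie detector):

```python
# Toy illustration: with a perfectly reliable yes/no "oracle",
# any secret drawn from a known candidate set can be pinned down
# by binary search -- roughly log2(N) questions for N candidates.

def recover_secret(candidates, yes_no_oracle):
    """Narrow a sorted list of candidate answers to one, asking only
    questions of the form 'is the answer at or before position X?'"""
    lo, hi = 0, len(candidates) - 1
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        # Question posed: "Is the true answer among candidates[0..mid]?"
        if yes_no_oracle(lambda s: candidates.index(s) <= mid):
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo], questions

# Hypothetical example: 8 candidate targets, so 3 questions suffice.
targets = sorted(["airfield", "barracks", "bridge", "depot",
                  "embassy", "harbor", "pipeline", "station"])
secret = "harbor"
oracle = lambda predicate: predicate(secret)  # always answers truthfully
found, asked = recover_secret(targets, oracle)
```

The point of the sketch is only that the questioner never needs a narrative from the subject at all - just reliable binary signals, which is exactly what the hypothesized brain-scan detector would provide.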
Let's also not forget what a lie is - it's deliberate deception. If I think I know the answer but am wrong, my "honest" answer will "pass" the lie detector test, but you will nonetheless get bad information. Let's say I'm an evil terrorist mastermind. A couple of things I might do:
Feed misinformation into the heads of people I will set up to be captured. They are caught, the information is elicited, and... it's all bad, but the "lie detector" says they think it's true.
Occasionally brainstorm plans with my colleagues, from the pedestrian to the ludicrous, with no actual intention of carrying them out. If somebody is captured, they "spill the beans" on all the plans they believe are real (or might be real), and the government that obtained the information will spend thousands of man-hours chasing shadows.
The reason it's difficult to debate moral issues is that your opponents either don't see things the way you do, or simply don't care about the morality. Religions have a pretty simple way of resolving the moral dilemma - morality is what God tells you to do, and you can't question God. If the answer is, "I disagree," or "I think the tangible benefits outweigh the moral cost," or "I don't care," how can you really respond? Keep repeating yourself?
It's the difficulty of making a moral case that leads people to try to sidestep the issue, raising arguments of cost and efficacy instead. And yes, if you're not willing to change your position when the arguments you make are undone, yet pretend that your opposition is based upon something other than morality, you're risking that a solution will be found and that you'll lose your credibility. Sure, you can raise the argument honestly - "I oppose torture on moral grounds, but consider the following..." - but you're still not making the case for what you believe to be the best reason to oppose torture. So do make the moral case. But at the same time, it's fair to point out that the type of technology we're prognosticating, which might make torture "work," should also make it unnecessary.