• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

NSA Seeks Holy Grail of Spy Technology

If privacy is a concern then efforts would be better placed pushing against the actual methods of data gathering (cameras in public places, the tracking of Internet IPs, phone tapping, etc.).

A faster way to compile data already gathered merely reduces response time, it doesn't magically grant knowledge beyond the sources.
 
Exactly. This research is towards better data analysis. As I've said several times, it does NOT make anything available to the government to which they do not already have access.

Any conclusion reached by this software -- assuming it works -- could be reached by a human analyst tasked to look at a particular person without it. The software simply makes the process faster and more efficient.

And that's ignoring the fact that we're only talking about one possible use-case here. There are many others. If you stifle any technology which could possibly be put to a bad use, you'd be back in the stone age.
 
^Except analysis is the current choke point. For example, it's been said the NSA has the capability to monitor any piece of digital communication: satellite, internet, phone calls. The CIA, as outlined in reports on 9/11 and the ongoing US wars, has a large number of informants and operatives who cull huge amounts of unfocused data. The difficulty is turning that vast amount of data into usable information. Speeding up processing while keeping staffing levels equal allows you to refine more information. Say you've got 100 staffers who can process and filter 1000 cases in a day. Cut the processing time in half and you can either fire half your staff to maintain the current caseload, or keep everyone and double the daily load to 2,000 cases.

Never mind the kinds of creative methods that can't be utilized without this type of technology.
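The staffing arithmetic in the post above can be sketched in a few lines of Python; the staffer counts and per-staffer rates are just the hypothetical numbers from the example, not real agency figures.

```python
# Hypothetical numbers from the example above: 100 staffers processing
# 1000 cases/day, then an automation speedup that halves processing time.
def daily_caseload(staffers, cases_per_staffer, speedup=1.0):
    """Cases processed per day; `speedup` multiplies per-staffer throughput."""
    return staffers * cases_per_staffer * speedup

baseline   = daily_caseload(100, 10)        # status quo: 1000 cases/day
same_staff = daily_caseload(100, 10, 2.0)   # keep everyone -> 2000 cases/day
half_staff = daily_caseload(50, 10, 2.0)    # fire half -> same 1000 cases/day
print(baseline, same_staff, half_staff)     # -> 1000.0 2000.0 1000.0
```

The point of the sketch is simply that a processing speedup multiplies throughput, so an agency can trade it for either fewer staff or more coverage.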
 
Don't worry too much. Everyone else will be splayed open for all to see as well. It will at least be fair, democratic in a way unimaginable to the Greeks who coined "democracy" in the first place. And you'll have really nice stuff too.
Are you kidding me? The people who work at the NSA and those in power would be able to see everything about us, but they would sure as hell not let us see that information about them.


I was kidding about the nice stuff. However, like I said, government invasion of privacy is our least concern. The government is shackled by existing law on what it can and cannot do. The problem is with non-governmental entities. Corporations often get flak for this, but I honestly think it's people themselves who are the greatest threat to their own privacy. They post online, for the entire planet to see, what they do minute by minute through the whole day, and who they did what with. That normalizes the kind of Orwellian openness you seem to fear so much.

Once it's okay to tell everyone what you're doing all the time, and all your friends start doing it too, they'll see no problem when the products they buy automatically start tracking their behavior for them. They'll actually think it's extremely convenient, even more so when the products start intelligently recommending what they could be doing next. Corporations have already started, and will continue, to gather vast amounts of information about our likes, dislikes, activities and friends. Facebook is 1984 Inc. Only when it is socially acceptable for everyone else around us to know what we're doing, and for the corporations to know us better than we know ourselves -- only when the entire civilian sector has absorbed us into the "collective" -- will the government be allowed the same level of invasiveness.

In the end, the situation will be quite similar to the small village I mentioned previously, where everyone knew everyone else. You knew your neighbors entirely; you grew up together, and you knew their dark secrets even if you didn't talk about them. Almost like Mayberry, but less human. Which means that while you may not have the dirt on everyone in the NSA, you'd be able to dig up a lot about the man in black who knocks on your door.

I think it is common to think that large organizations (NSA for example) are monolithic blocks of like-minded individuals with agendas counter to yours. It's simply not true. The NSA is just a group of people, which is why it couldn't get away with the recent wiretapping scandal. The front-line workers smelled something rotten, and though it took time, they brought it into the sunlight. Nothing in a free society stays secret forever, nothing important at least.

However, I do wonder what you expect me to do given the direction I see the world heading. Really, what do you think I should do about something that's inevitable? This isn't something I can write my congressman about. I can't go around telling people that Twitter, Myspace and Facebook are the trifecta of evil. It's a social movement, and the only way it can or will stop is when society at large chooses to stop it of its own accord. That will happen when people stop publishing so much of themselves, and I already do that. So again, what do you propose?
 
^Except analysis is the current choke point. For example, it's been said the NSA has the capability to monitor any piece of digital communication: satellite, internet, phone calls. The CIA, as outlined in reports on 9/11 and the ongoing US wars, has a large number of informants and operatives who cull huge amounts of unfocused data. The difficulty is turning that vast amount of data into usable information. Speeding up processing while keeping staffing levels equal allows you to refine more information. Say you've got 100 staffers who can process and filter 1000 cases in a day. Cut the processing time in half and you can either fire half your staff to maintain the current caseload, or keep everyone and double the daily load to 2,000 cases.

Never mind the kinds of creative methods that can't be utilized without this type of technology.

Which just goes back to what I said before---the need for the technology exists, ergo someone is going to figure it out, whether this particular group does it or not. That's just how the research community functions.

Look at the UK. They have far more CCTV monitoring there than we do, per unit area. Recent analyses have shown that this capability hasn't done a whole lot to reduce crime, mostly due to the massive amount of video -- too much for anyone to actually watch all of it. Computer assistance in determining when something unusual or suspect is occurring would go a long way towards improving the value of that investment.
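As a rough illustration of the kind of computer assistance being described, here is a toy Python sketch that flags "unusual" frames in a video stream. Frames are modeled as flat lists of grayscale pixel values, and the frame-differencing rule and threshold are invented for illustration; real video-analytics systems are far more sophisticated.

```python
def flag_unusual_frames(frames, threshold=30.0):
    """Return indices of frames that differ sharply from their predecessor."""
    flagged = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        # Mean absolute per-pixel change between consecutive frames.
        change = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if change > threshold:
            flagged.append(i)
    return flagged

quiet = [10, 10, 10, 10]           # a static scene
busy  = [200, 5, 180, 20]          # sudden activity
footage = [quiet, quiet, busy, quiet]
# Flags the transition into and back out of the unusual activity.
print(flag_unusual_frames(footage))  # -> [2, 3]
```

A human operator would then review only the flagged frames instead of the entire feed, which is the value proposition the post describes.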
 
Kv1at3485,

Removing a great many of the cameras in public places would obviously be a good idea, but the government could always put smaller hidden ones in their place.

As for preventing electronic eavesdropping, that's much harder to enforce. It can be done surreptitiously, in secret. The only way we'd know is if someone came forward.


Lindley,

This gives the government a lot more than it currently has. The ability to gather the data is already there, but using A.I. to organize it would let them compile a dossier on everybody so detailed that it could even gain insights into what a person is thinking, and what they will and will not do. Yes, a human being will of course have to ask the computer for the information, but this would provide that person with everything there is to know about everybody.

It is completely excessive, it's antithetical to a free society, and this kind of development will totally destroy any concept of privacy and may even usher in a "Minority Report" kind of society.

Of course it may have some good uses, but the disastrous misuses that it would bring, which I already mentioned, far outweigh the benefits.

If I cannot convince you why this is a bad idea, and cannot convince you why it's bad to have a society that is constantly monitored and has no freedom from unreasonable searches and seizures, then I don't think anyone can.

As for your argument that if we regulate or stifle any technological development we'd be back in the stone age, that is simply nonsense -- we wouldn't suddenly regress to the stone age. Now, I don't believe we should stifle *all* technological development, but there are some doors that probably shouldn't be opened; not every technological capability should or must be embraced.

For example, I feel it is perfectly appropriate to regulate or stifle technological development aimed at, say, a weapon designed to selectively kill certain ethnic groups while leaving others unharmed. Such a weapon would be unconscionable and truly evil.

As for your comment about the UK having far more CCTV cameras than we do, you would probably find it interesting to know what their original purpose was. The idea wasn't to stop crime; it was so that, in the event a terrorist act was perpetrated, they could find out who did it.

Regardless, though, the government should not be placing so many cameras in public. They shouldn't be going around monitoring every person or thing that moves in a public place. There are many people, including a number of men and women at the ACLU, who consider the UK to be one of the most pervasive surveillance societies. Also on that list of the worst offenders is China, a country with a lengthy history of egregious civil rights violations.

All of this surveillance stems from the desire to be safe, and I understand that, but there is no such thing as 100% safety coexisting with anything greater than 0% freedom. To be honest, I'd rather have a little risk in life and some freedom from big brother monitoring every aspect of my existence than live in a perfectly safe society without any privacy or any freedom.


STR,

You are obviously not comprehending a key piece of information: just because you can do something does not mean you should. The NSA should not be monitoring every single American's phone calls, internet transmissions, e-mails, and so on.

As for the government being bound by all sorts of laws... that pretty much disappeared under Bush. The NSA monitored anything and everything -- something they are not supposed to be doing. They didn't have any warrant; the President simply felt that as unitary executive, decider, and dictator, he had the power, and he did it.

Non-governmental entities violating privacy rights is a good point to raise, but if such an entity is giving the government the data, doesn't that make it complicit in the act?

As for people communicating online, we're a communicative species. We talk, we chat, we post pictures. Granted, I think a lot of people post way too much about themselves, which is indeed a problem. Still, the fact remains that our government is not supposed to be monitoring anything and everything. This is not the Soviet Union; this is the United States of America, for God's sake!

And I know the NSA is not a monolithic organization in which every person thinks exactly the same. There are differences between individuals, and some people even in the NSA actually have something called a conscience and give a damn about things like civil liberties. The fact that a couple of them came to their senses and reported it doesn't make what the NSA did right, or mean that the bulk of their workers are ethical; it simply means a few people there had a conscience. I would not be surprised if the NSA learned the wrong lesson from all of this and is now making damn sure that such people are never involved with domestic surveillance again, so they won't get the whistle blown on them a second time.


CuttingEdge100
 
I simply think you're drawing the line at the wrong place. The research described on that Carnegie Mellon website stops well short of your worries.

The article at the start of the thread describes several possible extensions to the concept, but I haven't yet managed to find confirmation, on a technical level, of ongoing research towards the aspects you're so concerned with. It may simply be classified.
 
OP, can you link to an article on this from a legitimate source? Where are the quoted portions of your post coming from?
 
OP, can you link to an article on this from a legitimate source? Where are the quoted portions of your post coming from?

The article shows up on PBS' Nova site. I don't know if that's where it originated, but it's the most reputable place I could find it.
 
Lindley,

So you're saying that we have to develop this technology, because of the potential that others may or may not develop this technology?

I really hate that argument; it's an appeal to fear, and often a fear that's over-inflated to justify whatever is being developed.

Something like this, I feel, would inevitably be misused with disastrous consequences by whoever develops it -- with that said, we shouldn't develop it. There's a possibility no other nation will develop it, and if they do, so what? I don't really care, because we'd be equally screwed either way -- does it matter whether we're screwed by our own government or another's if the level of abuse is equal?

There are certain things that we shouldn't develop or pursue simply because it's not right. There are certain roads we just shouldn't go down.


CuttingEdge100
 
No, I'm saying that algorithmic advancements have a way of occurring when the necessary precursors have become available. That's why there are so many cases of simultaneous development of the same concept by multiple researchers.

If DoD doesn't pay someone to figure out the necessary equations, then someone at MIT or CalTech will figure it out anyway; it'll just take a few years longer (at most). And when that happens, the paper will be openly circulated in the academic community, which means everyone will have it anyway. That's why DARPA exists -- to try to give the US a slight edge in developing this stuff, not to decide whether or not this stuff gets developed at all.

There is NO CHANCE, none at all, that this technology will never be developed by anyone unless it's actually impossible. There is, however, plenty of question as to who will develop it and how it will be used. The first is a matter of science and funding; the second is a matter of politics.

Stop trying to pretend that blocking the advance of science is the answer. It never is. If you're concerned, you'd do well to familiarize yourself with the political arena, because that is where you might actually make an effective difference in how such developments are used.
 
Lindley,

Stop trying to pretend that blocking the advance of science is the answer. It never is.

I never said that I'm for blocking *all* scientific advancement. What I am saying is that there are certain scientific developments that should not be pursued, for one reason or another. Do you think a bioweapon engineered to kill only certain ethnic groups should be developed? I personally don't think so.

If you're concerned, you'd do well to familiarize yourself with the political arena, because that is where you might actually make an effective difference in how such developments are used.

I definitely agree that laws need to be created to govern the use of these kinds of technologies. They have to be reasonable and effective. Treaties need to be created between countries for the same purpose.
 
That's a completely different scenario. We're talking here about the development of algorithms which have dozens or hundreds of possible beneficial uses. That they could possibly lead to reduced privacy is merely one use of many.
 
I've got a bad feelin' about this.

One could say, if you have nothing to hide, why worry... then again, how much does the general public know about what goes on within government agencies, who always seem to have something to hide... but that's for our security, right?
 
The system as described only defeats "security through obscurity", which is a myth anyway. That's all. It's an algorithm for correlating publicly available data, nothing more.

I'm fairly sure I've said this previously.

Intelligent, automated data correlation is an incredibly useful tool in numerous arenas. Imagine, a program which could analyze fault sensors in a nuclear reactor and intelligently report developing problems and potential solutions far faster than they would likely be noticed or analyzed by human operators.
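A minimal sketch of that reactor-monitoring idea, assuming made-up sensor names and nominal bands (nothing here comes from a real reactor spec): the program checks each reading against its band and reports any excursion immediately, rather than waiting for an operator to notice a drifting gauge.

```python
NOMINAL = {                      # (low, high) nominal bands; units are made up
    "coolant_temp":  (280.0, 320.0),
    "core_pressure": (14.0, 16.0),
    "pump_flow":     (90.0, 110.0),
}

def report_faults(readings):
    """Return a human-readable alert for each sensor outside its nominal band."""
    alerts = []
    for sensor, value in readings.items():
        low, high = NOMINAL[sensor]
        if not low <= value <= high:
            direction = "high" if value > high else "low"
            alerts.append(f"{sensor} {direction}: {value} (nominal {low}-{high})")
    return alerts

print(report_faults({"coolant_temp": 335.0, "core_pressure": 15.2, "pump_flow": 101.0}))
# -> ['coolant_temp high: 335.0 (nominal 280.0-320.0)']
```

A real system would also correlate trends across sensors and suggest causes, but even this toy version shows how automation surfaces developing problems without a human scanning every reading.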
 
The system as described only defeats "security through obscurity", which is a myth anyway. That's all. It's an algorithm for correlating publicly available data, nothing more.

I'm fairly sure I've said this previously.

Intelligent, automated data correlation is an incredibly useful tool in numerous arenas. Imagine, a program which could analyze fault sensors in a nuclear reactor and intelligently report developing problems and potential solutions far faster than they would likely be noticed or analyzed by human operators.

That is great and all and I am all for technology to be used in the right way. Just have a bad feelin' when I read about stuff like this.
 
Lindley,
That's a completely different scenario. We're talking here about the development of algorithms which have dozens or hundreds of possible beneficial uses. That they could possibly lead to reduced privacy is merely one use of many.

Actually, it's not. You basically said in the previous exchange that blocking any advance of science and technology is never the right answer.

I'd say that is an application of science and technology so immoral and reprehensible that I would feel completely comfortable blocking its advance. I think most people would agree with me on that.


Jetfire,
I've got a bad feelin' about this.

Agreed

...then again, how much does the general public know about what goes on within government agencies, who always seem to have something to hide... but that's for our security, right?

Exactly


Lindley,
The system as described only defeats "security through obscurity", which is a myth anyway. That's all. It's an algorithm for correlating publicly available data, nothing more.

No, it doesn't, and I'm surprised a person as intelligent as you would not see through that. This technology would feature artificial intelligence that would almost certainly have to be capable of a pretty good degree of inferential reasoning.

Even if I don't know everything about you, if I have a sufficiently large amount of detailed information about you, I can fill in the blanks to the point that I'd have a virtually complete picture.

Intelligent, automated data correlation is an incredibly useful tool in numerous arenas.

Yes, but who sets into place ethical guidelines on when it can or should be used?

Because I can't see many people being able to resist the temptation to misuse this.

I certainly do not see the intelligence agencies as possessing the capability or willingness not to misuse this kind of technology. They have enormous powers of secrecy, and with that the ability to do what they want without being caught. They have an inherent disregard for privacy (which kind of goes with the job) and a willingness to spy on American citizens even though they are not supposed to -- the illegal warrantless wiretapping program was a prime example; they didn't just spy on Americans calling other countries, or vice versa, they spied on Americans calling Americans. Only making matters worse, they are often resistant to proper oversight.

Such power in those hands would be virtually predestined to be abused, and abused egregiously.


Jetfire,
That is great and all and I am all for technology to be used in the right way. Just have a bad feelin' when I read about stuff like this.

I have no problem with technology used in a proper and ethical manner. I have a problem when technology is used in a grossly unethical manner to spy on innocent people, and potentially violate every last vestige of privacy we have.

No good can come from a government trying to gather everything there is to know about everybody (including all its citizens). Abuse would be virtually certain.
 
There's a question to be asked here: If a computer program has extrapolated a complete model of an individual's preferences and wonts, but no human ever queries that model, has privacy been violated?

I'm in favor of increased automated analysis of the data which human analysts look at now, because it means that less of that data will actually be looked at by a human; only those individuals whom the software flags as potential threats will receive human scrutiny.
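That triage idea can be illustrated with a toy sketch: an automated pass scores records against a watch list, and only records above a threshold ever reach a human analyst. The scoring rule, field names, and watch terms are all invented for illustration; real systems use far richer models than keyword counting.

```python
def score(record, watch_terms):
    """Count how many watch-list terms appear in a record's text fields."""
    text = " ".join(str(v).lower() for v in record.values())
    return sum(term in text for term in watch_terms)

def triage(records, watch_terms, threshold=2):
    """Split records into (for_human_review, never_seen_by_a_human)."""
    review  = [r for r in records if score(r, watch_terms) >= threshold]
    ignored = [r for r in records if score(r, watch_terms) < threshold]
    return review, ignored

records = [
    {"id": 1, "note": "bought fertilizer and a rental truck"},
    {"id": 2, "note": "bought groceries"},
]
review, ignored = triage(records, ["fertilizer", "rental truck"])
print([r["id"] for r in review], [r["id"] for r in ignored])  # -> [1] [2]
```

In this sketch the analyst would see only record 1; record 2 is processed by the machine but never reaches human eyes, which is the privacy tradeoff being argued.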

There will always be a tradeoff between privacy and security, and it seems to me that more intelligent computer analysis is the direction which provides the greatest increase in security for the least decrease in privacy.

This does, of course, assume that the software works in an ideal manner and that the agencies trust it to do so, which is probably years away at this point.
 