If you have ever thought about security in any depth, you have probably discovered the connection between security and usability. Let's take email as an example. The tools to encrypt your mail are out there, they are free, and they give you (it is even in their name) pretty good privacy. So why does almost nobody use them? Right: the usability is zero.
It is possible to explain to an average computer user why they need to give a password to get mail from the mail server: after all, they do not want anyone else to be able to get their mail. It does not cross their mind that, in theory, anyone can read their email, because it is sent in clear text. Even if the connection to the mail server is encrypted, many inter-mailserver connections are still unencrypted. And the mail server itself might get hacked. From a regular user's perspective, though, all of this is "highly improbable": if they need a password to get their mail, then surely everyone does. Even people who know that servers can be hacked think it can never happen to them - like someone who never buys a train ticket, or who has been driving without a license for ten years and has never been caught. Besides, "there is nothing important in my mails anyway".
There is another reason people do not encrypt their mail. If you want to send sensitive information (like a password reset or an account unlock-code) securely via email, you are mostly out of luck: you would need the other person's public key. Most people do not have one, nor know how to get one. The concept of asymmetric encryption is also unknown to the average computer user; they might ask questions like "It's a key, how can it be public? They told me to never tell anyone my password." The problem with our example, email, can be traced back to something simple: email was not designed with encryption in mind.
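To make the idea concrete, here is a minimal sketch of asymmetric encryption in Python, using the third-party cryptography package (my choice for illustration; PGP tools work along the same lines). The point is that the key used to lock a message cannot unlock it, which is why it can safely be public:

```python
# Minimal sketch of asymmetric encryption with the third-party
# "cryptography" package (pip install cryptography). Names and the
# message are illustrative only.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The receiver generates a key pair once. The private key stays secret;
# the public key can be handed out to anyone who wants to write to them.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone holding the public key can encrypt...
ciphertext = public_key.encrypt(b"account unlock-code: 1234", oaep)

# ...but only the holder of the private key can decrypt.
assert private_key.decrypt(ciphertext, oaep) == b"account unlock-code: 1234"
```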
Let us look at another system that was seemingly designed with integrated encryption: the mobile phone network. In fact, the very first generation of mobile phone networks was not encrypted at all, and anyone with a little technical expertise could easily listen to all the calls "on air". GSM and younger generations of mobile phone networks can use encryption, but it is often not turned on by the network providers, and weak encryption algorithms like A5/1 are still in use.
What GSM and newer generations do not provide is authenticity. When calling someone, you cannot be sure it is really them talking. Yes, you recognize their voice - but what if they are whispering, saying they have a sore throat? There are other, more technical possibilities too (like a speech synthesizer). And even if you think you are absolutely sure who you are talking to, there is no guarantee that nobody is listening in ("man in the middle"), because no mobile phone network provides end-to-end encryption. Even though I did not check, I am fairly sure that end-to-end encryption was never seriously discussed when designing new generations of mobile phone networks. National authorities are used to being able to tap people's phones - even though, in theory, people could always meet and discuss things in private, and there is no good reason why a conversation on the phone should be any less private.
Again, the problem is that the system was not designed with real security in mind. Voice and data might be encrypted on the air, and pulling a 1-2 GHz signal (one that is not optimized for reception by you, but by someone else) out of the air and then cracking the encryption is a high enough hurdle to be sure your everyday neighbour will not listen to your calls. But if your line is wiretapped, you are out of luck. Apparently you do not need to know whether the police are listening to your calls; yet if they want to search your house or listen to the conversations you are having at home, they need a warrant, knock on your door, and let you know. (They might need a warrant for a wiretap too, depending on where you live, but you will not know about it.)
Sure: as of yet, there are no laws in most first-world countries that prohibit making sure nobody can intercept, change or decrypt your conversations (though we might live to see that day). The argument here is simple: if you were required by law to make sure your phone calls, data transfers, video chats and so on can be monitored by the government, you could still just go and personally meet the person you want to talk to or exchange data with. It would be pretty absurd (though I am sure we will live to see the day when it is proposed) to demand that nobody meet or talk in private... In short, the reasoning goes: "If I can talk to my friend in private by meeting him anyway, why should I not be allowed to talk to him in private on the phone?" This is already slightly undermined by the fact that even most first-world countries nowadays record metadata about every digital conversation, while they of course cannot know when someone talks to someone else in person. This probably explains why programs like the German "VDS" (Vorratsdatenspeicherung, telecommunications data retention) have little or no measurable effect on crime statistics.
While new communication systems can certainly be designed to ensure both authenticity and confidentiality, there will always be a tradeoff between security and usability. A well-implemented system can encrypt and authenticate all communication (data, voice, video...) by default, at no extra cost in usability. Well, almost no extra cost. When sending someone a message, the system still needs to know some key, probably a public key, of the receiver. In a well-designed system this key could be fetched automatically. However, until sender and receiver meet in person and make sure each got the correct key - that the key was not swapped for some other key by a person who wants to listen to the traffic - the sender only knows that they are sending encrypted traffic. They do not know whether they are encrypting with the correct key.
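This "fetch automatically, verify later" pattern is often called trust-on-first-use (TOFU). Here is a sketch of the idea; the key directory below is a plain dict standing in for whatever key server a real system would query:

```python
# Sketch of trust-on-first-use (TOFU) key pinning. DIRECTORY simulates
# a key server; a real system would do a network lookup instead.
DIRECTORY = {"alice": b"alice-public-key-bytes"}
known_keys = {}  # contact -> the public key we saw first (the "pinned" key)

def get_encryption_key(contact: str) -> bytes:
    key = DIRECTORY[contact]
    if contact not in known_keys:
        # First contact: accept the key, but we merely *hope* it is genuine
        # until both sides compare fingerprints in person.
        known_keys[contact] = key
    elif known_keys[contact] != key:
        # The directory returned a different key than before: either the
        # contact legitimately replaced their key, or someone is in the
        # middle. Stop and verify instead of silently encrypting.
        raise RuntimeError(f"key for {contact} changed - verify it first!")
    return known_keys[contact]

get_encryption_key("alice")                 # first use: pinned, hopefully genuine
DIRECTORY["alice"] = b"attacker-key-bytes"  # simulate a swapped key
# A second call to get_encryption_key("alice") now raises an error.
```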
An example of such a well-designed system is Whisper Systems' TextSecure and RedPhone. However, even they cannot escape the final challenge: getting users to verify the public keys. Mostly this is done by comparing a "fingerprint", which is a hash of a public key. It has been found that crypto jargon scares people, so I will try to explain it in layman's terms.
In all these inherently secure systems that automagically give confidentiality and authenticity, somewhere there is a key used to encrypt messages. This key is provided by the receiver of the messages, and it is different for every receiver (i.e. for every contact in your phone book there is one public key). With this key, the so-called "public key", you as a sender can only encrypt the message. The receiver can then decrypt messages encrypted with the public key using their "private key". But how do you know that the "public key" you got is actually your friend's and not someone else's - someone who could then decrypt your messages, modify them, re-encrypt them with your friend's real public key and pass them on ("man in the middle")? Note that a public key is public; everyone can have it. So to make sure you got the correct key, you have to go to your friend and ask: "Is this your public key?" Then he can look at it and check. Since public keys are very long and not easy to read, usually "fingerprints" are compared instead. Think of your thumb's fingerprint: no other thumb in the whole world has the same fingerprint, so to tell whether two prints come from the same thumb or from two different thumbs, it is enough to compare the fingerprints - you do not need to compare the thumbs themselves.
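In code, a fingerprint is nothing more than a cryptographic hash of the public key, chopped into chunks that two humans can read out to each other. A minimal sketch using Python's standard library (real applications differ in hash choice and formatting):

```python
# A fingerprint is a hash of the public key, formatted for human eyes.
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group the hex digits so two people can compare them aloud.
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

# Two different keys will, for all practical purposes, never share a
# fingerprint, so comparing fingerprints is as good as comparing keys.
print(fingerprint(b"alice-public-key-bytes"))
```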
So where is the tradeoff in the previous example? Well, when the app asks you whether "this is the correct fingerprint" of your friend's public key, you as a user can either say "okay" and not think about it, or you can go the extra mile and learn what the program actually wants you to do. In the first case you can only hope that nobody messed with your keys; in the second case you can make sure.
There is always a balance between security and usability - or maybe security and laziness. It is true that you can get a lot further than today's standard systems (email, phone, web browsing...): you could start from a much higher level of security with the same usability. With clever design, usability might even improve - imagine logging in to your computer just once, anywhere, on any operating system, and automatically being recognized on every website where you want to be recognized; your mail server would not ask for a password, and creating a new account would not require yet another password. But if you want better security - that is, if you do not want to just hope that nobody fiddled with the public keys you received, believing them to be your friend's - then suddenly you have to compare two fingerprints, which can be tiresome and requires attention to detail: if just a single letter or digit is different, something is sketchy and the key cannot be trusted.
But it does not stop at fingerprints. Applications get old, updates might not arrive on time, and there might be new results in cryptanalysis. One day a cipher might be broken or suddenly considered unsafe, and the user would have to dive deep into the jungle of cryptography-related options and change some default cipher. Just open your favourite browser's preferences and look for anything cryptography-related. You will probably not understand anything there and just leave it as it is - not knowing whether the settings are actually still considered secure. In Firefox, you can even open a new tab, type "about:config" into the URL bar, confirm, and then search for "security" in the search field on the config page. Not even cryptography experts can be sure that there is no problem in a particular combination of those settings.
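To get a feeling for how many knobs hide behind those defaults, here is a sketch using Python's standard ssl module instead of a browser; it prints every cipher suite the default configuration is willing to negotiate. Whether each of them is still considered safe is exactly the question the average user cannot answer:

```python
# List the cipher suites Python's default TLS configuration would accept.
# The output depends on your Python and OpenSSL versions - which is the
# point: "secure defaults" are a moving target.
import ssl

ctx = ssl.create_default_context()
for cipher in ctx.get_ciphers():
    print(cipher["name"], cipher["protocol"])
```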
If you are still a bit in the dark about where the tradeoff actually is when it comes to security, passwords are the most obvious example. You might have noticed "annoying" sites requiring a certain password length, maybe even a digit, upper- and lowercase letters, special symbols... If you want better security, you have to follow all these rules: make the password long enough, include at least one digit, mix upper- and lowercase letters, and throw in a special symbol or two.
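As a sketch, here is roughly what such a site checks when you pick a password. The exact thresholds are made up - every site invents its own:

```python
# Sketch of a typical password policy check; the rules and the minimum
# length are illustrative, not any particular site's policy.
import re

def password_problems(pw: str) -> list:
    problems = []
    if len(pw) < 10:
        problems.append("too short (need at least 10 characters)")
    if not re.search(r"[0-9]", pw):
        problems.append("needs a digit")
    if not (re.search(r"[a-z]", pw) and re.search(r"[A-Z]", pw)):
        problems.append("needs both upper- and lowercase letters")
    if not re.search(r"[^0-9a-zA-Z]", pw):
        problems.append("needs a special symbol")
    return problems

print(password_problems("password"))        # fails almost every rule
print(password_problems("c0rrect Horse!"))  # passes these particular rules
```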
Think following this is hard? Well, that was just passwords. There is a set of rules for how to read email so as to avoid viruses (which can - among other things - steal passwords that you have saved in your browser and elsewhere; these are not encrypted unless you took extra steps), there is a set of rules for how to select software when you need something and have to download it from somewhere you cannot really trust, and much more.
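As one concrete example from the software-selection rules, a sketch: compare a checksum of the file you downloaded against the checksum the authors publish. The filename and the expected hash below are placeholders:

```python
# Verify a download against a published SHA-256 checksum. "installer.exe"
# and the expected value are placeholders for illustration.
import hashlib

def sha256_of_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0000...placeholder...0000"  # copied from the project's website
if sha256_of_file("installer.exe") != expected:
    raise SystemExit("checksum mismatch - corrupted or tampered, do not run")
```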
If you want security - authenticity and confidentiality - that is, if you want to be sure that the messages and data you receive were really sent by whoever claims to have sent them, and that nobody can read your conversations or access your data: be prepared to invest quite some time and effort.