Cloud Computing - The Biggest Mistake, Still Unsolved
Published: Wednesday, January 23, 2019 written by Andy Flagg
View Count: 232
Keywords: aws, azure, google cloud, virtualization
The biggest problem with this approach is that the clients that actually manage the content (push and pull) are usually not connected at high speed (gigabit or better). Resources in the cloud are awesome if your content creators are sitting on the wire in the cloud, either inside the data center or on a hand-off of roughly equal bandwidth. That is also why transactional data workloads need redundancy, but then again, who is actually working on the data?
Until Fiber to the Curb (FTTC) reaches every home, household, and business, I believe diagrams like this are great only if content is being pulled down. The 1% who push content back up and forth need extreme performance.
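To put numbers behind that bandwidth argument, here is a back-of-envelope sketch. The file size and link speeds are my own illustrative assumptions, not figures from any vendor: it estimates how long pushing 50 GB of content takes over a typical home uplink versus gigabit-class and data-center links.

```python
# Rough transfer-time arithmetic (illustrative numbers, decimal units).

def transfer_hours(size_gb: float, uplink_mbps: float) -> float:
    """Hours to push size_gb gigabytes over an uplink of uplink_mbps megabits/s."""
    size_megabits = size_gb * 8 * 1000  # 1 GB = 8,000 megabits
    return size_megabits / uplink_mbps / 3600

for label, mbps in [("typical home upload", 10),
                    ("gigabit (FTTC-class)", 1000),
                    ("10 GbE inside the data center", 10_000)]:
    print(f"{label:30s}: {transfer_hours(50, mbps):7.2f} h")
```

At 10 Mbps the push takes over 11 hours; on a gigabit link it drops to minutes, which is the gap between content consumers and content creators the article is pointing at.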
In the latest news, the City of Reno, Nevada, in moving its local application and data servers offsite to a SWITCH data center, just needs to be careful about application performance and cost effectiveness. While the IT Director does make a minor point about sleepless nights over keeping data on premises, that concern can be averted with great backups, redundancy, and 24/7 vendor SLAs of 99.99% uptime.
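It is worth knowing what a 99.99% uptime SLA actually permits. The arithmetic below is a generic illustration of availability percentages, not the terms of any particular vendor's contract:

```python
# Allowed downtime per year at a given uptime percentage (illustrative).

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(uptime_pct: float) -> float:
    """Minutes per year a service can be down and still meet uptime_pct."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):7.1f} min/yr of downtime")
```

So "four nines" still allows roughly 52 minutes of outage a year, which is why the backups and redundancy mentioned above matter alongside the SLA itself.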
More to come...
If you found this article helpful, consider contributing $10, $20 (an Andrew Jackson), or so to the author. More authors coming soon.
FYI: we use PayPal rather than Patreon; Patreon has about 3x the transaction fees, so we don't use it. Not yet, anyway.
© 2020 myBlog™ v1.1 All rights reserved. We count views as reads, so let's not overthink it.