APM+: transforming the way .NET and Java developers optimize application performance, providing more than any traditional application performance management solution.
Source: Get code-level application performance management insights
A career is a marathon, not a sprint
Most success comes from repetition, not new things
Deprioritise your career when your kids are young
If you have skills, commitment and passion, careers tend to take care of themselves.
You can also miss the chance to learn. Children teach you a lot more than you teach them. They give you a second chance to see the world for the first time through their eyes.
Never work for horrible bastards
Life is way too short to tolerate really bad bosses. If you find yourself working for one, unless you are desperate or starving, start looking for a new job. Immediately. Then sack the bad boss. By leaving.
In the workforce, always act like you are 35
Recognise that staff are people with finite emotional capacity
Never sacrifice personal ethics for a work reason
Crucial to workplace happiness is value alignment. If you work somewhere that compromises your personal ethics and values, get out of there as quickly as you can. Good people will be unnerved by things that don’t feel right. If it doesn’t feel right, it probably isn’t. Bad things only manifest when good people don’t take a stand.
Source: The career advice I wish I had at 25 | Shane Rodgers | LinkedIn
16 free and non-obvious tools to boost IT management (Nov 19, 2015). The choice of a tool for IT service management is always based on references, whether the Gartner Magic Quadrant, Google searches, or recommendations from other professionals. And so we always end up with the same group of tools built around the ITSM 'philosophy'. None of them is complete. And you will only hear that from a represen
Check this out: Introduction to Microservices | NGINX www.nginx.com/blog/introduction-to-microservices/
Check this out: Virtustream Storage Cloud www.virtustream.com/cloud-iaas/virtustream-storage-cloud
Check this out: Facebook Did It, Slack Did It, Telegram Did It Too: Chatbots for the Enterprise – DZone Integration dzone.com/articles/facebook-did-it-slack-did-it-telegram-did-it-too-h?utm_content=bufferee2a5&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer
High-potential people only stay somewhere given the combination of three elements: Perspective (how far can I go?) x Learning (am I developing?) x Relationships (do I admire my boss? my colleagues?). blog.neoassist.com/o-que-escutei-de-um-head-do-google/
Check this out: Forrester names Microsoft Azure a leader in Big Data Hadoop Cloud Solutions azure.microsoft.com/en-us/blog/forrester-names-microsoft-azure-a-leader-in-big-data-hadoop-cloud-solutions/
Check this out: Beyond Jenkins: 7 devops tools www.techworld.com.au/article/600150/beyond-jenkins-7-devops-tools/
Check this out: What is software product management? t.co/u4IpwxzhPY
At WorldSense we build predictors for the best links you could add to your content by creating large language models from the World Wide Web. In the open source world, no tool is better suited for that kind of mass (hyper)text analysis than Apache Spark, and I wanted to share how we set it up and run it in the cloud, so you can give it a try.
Spark is a distributed system, and like any similar system it has a somewhat demanding configuration. There is a plethora of ways of running Spark, but I will try to describe the one that I think offers the best trade-off nowadays: a standalone cluster running (mostly) on bare-bones Amazon EC2 spot instances, configured using the newest Docker orchestration tools.
Before we start, let us double check what we need:
We will move backwards through this list, as it makes it easier to present the different systems involved. We allocate our machines with Docker Machine, using the very latest Docker Engine version, which contains all the functionality we need. Let us start with a very small machine:
DRIVER_OPTIONS="--driver amazonec2 --amazonec2-security-group=default --engine-install-url https://test.docker.com"
docker-machine create $DRIVER_OPTIONS --amazonec2-instance-type=t2.nano ${CLUSTER_PREFIX}ks
We will use that machine for Consul, an atomic distributed key-value store, inspired by Google's Chubby. Consul will be responsible for keeping track of who is part of our cluster, among other things. Installing it is trivial, since someone on the internet already packed it as a Docker container for us:
docker $(docker-machine config ${CLUSTER_PREFIX}ks) run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap
This takes a few minutes to start, but you should only really need to do it once per cluster¹. Every time you bring the cluster up you can point to that same Consul instance, and keeping a t2.nano running will cost you less than five bucks a year.
Now we can instantiate the cluster’s master machine. The core responsibility of this machine is coordinating the workers. It will be both the Spark master machine and the manager for our Docker Swarm, the system responsible for presenting the machines and containers as a cluster.
NET_ETH=eth0
KEYSTORE_IP=$(aws ec2 describe-instances | jq -r ".Reservations[].Instances[] | select(.KeyName==\"${CLUSTER_PREFIX}ks\" and .State.Name==\"running\") | .PrivateIpAddress")
SWARM_OPTIONS="--swarm --swarm-discovery=consul://$KEYSTORE_IP:8500 --engine-opt=cluster-store=consul://$KEYSTORE_IP:8500 --engine-opt=cluster-advertise=$NET_ETH:2376"
MASTER_OPTIONS="$DRIVER_OPTIONS $SWARM_OPTIONS --swarm-master --engine-label role=master --amazonec2-instance-type=m4.large"
MASTER=${CLUSTER_PREFIX}n0
docker-machine create $MASTER_OPTIONS --amazonec2-instance-type=m4.large $MASTER
There are a few interesting things going on here. First, we used some shell-fu to find the IP address of our Consul machine inside the Amazon network. Then we fed it to the swarm-discovery and cluster-store options so Docker can keep track of the nodes in our cluster and the network layout of the containers running in each of them. With the configs in place, we proceeded to create an m4.large machine and labeled it as our master. We now have a fully functional one-machine cluster, and can run jobs on it. Just point to the Docker Swarm manager and treat it as a regular Docker daemon.
docker $(docker-machine config --swarm $MASTER) run hello-world
To install Spark on our cluster, we will use Docker Compose, another tool from the Docker family. With Compose we can describe how to install and configure a set of containers. Starting from scratch is easy, but we will take a shortcut by using an existing image, gettyimages/spark, and focus only on the configuration part. Here is the result, which you should save in a docker-compose.yml file in the local directory.
version: "2"
services:
  master:
    container_name: master
    image: gettyimages/spark:1.6.0-hadoop-2.6
    command: /usr/spark/bin/spark-class org.apache.spark.deploy.master.Master -h master
    hostname: master
    environment:
      - constraint:role==master
    ports:
      - 4040:4040
      - 6066:6066
      - 7077:7077
      - 8080:8080
    expose:
      - "8081-8095"
  worker:
    image: gettyimages/spark:1.6.0-hadoop-2.6
    command: /usr/spark/bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077
    environment:
      - constraint:role!=master
    ports:
      - 8081:8081
    expose:
      - "8081-8095"
networks:
  default:
    driver: overlay
There are a lot of knobs in Spark, and they can all be controlled through that file. You can even customize the Spark distribution itself using a Dockerfile and custom base images, as we do at WorldSense to get Scala 2.11 and a lot of heavy libraries². In this example, we are doing the bare minimum, which is just opening the operational ports to the world, plus the Spark internal ports to the rest of the cluster (the expose directive).
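As a rough illustration of that kind of customization (a sketch only, not our actual build; the image name and the extra package below are hypothetical placeholders), one could derive a new image from the same base, push it to a registry so every swarm node can pull it, and point the compose file at it:

# Hypothetical sketch: derive a custom Spark image from the same base.
cat > Dockerfile <<'EOF'
FROM gettyimages/spark:1.6.0-hadoop-2.6
# placeholder for extra native libraries or jars your jobs depend on
RUN apt-get update && apt-get install -y libsnappy1 && rm -rf /var/lib/apt/lists/*
EOF
docker build -t myregistry/spark-custom:1.6.0 .
docker push myregistry/spark-custom:1.6.0
# then change the image: lines in docker-compose.yml for both the master and
# worker services to myregistry/spark-custom:1.6.0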
Also note the parts of the config referring to the overlay network. The default network is where all services defined in the config file will run, which means they can communicate with each other using the container name as the target hostname. The swarm scheduler will decide for us on which machine each container goes, respecting the constraints we have put in place. In our config file, we have one constraint that pins the master service to the master machine (which is not very powerful) and another that keeps the workers off that machine. Let us try bringing up the master:
eval $(docker-machine env --swarm $MASTER)
docker-compose up -d master
lynx http://$(aws ec2 describe-instances | jq -r ".Reservations[].Instances[] | select(.KeyName==\"$MASTER\" and .State.Name==\"running\") | .PublicDnsName"):8080
So far we have bootstrapped our architecture with Consul, defined our cluster with Docker Swarm and delineated our Spark installation with Docker Compose. The last remaining step is to add the bulk of the machines which will do the heavy work.
The worker machines should be more powerful, and you don't have to care too much about the stability of the individual instances. These properties make workers a perfect candidate for Amazon EC2 spot instances. They often cost less than one fourth of the price of a reserved machine, a bargain you can't get elsewhere. Let us bring a few of them up, using docker-machine³ and the very helpful GNU parallel⁴ script.
WORKER_OPTIONS="$DRIVER_OPTIONS $SWARM_OPTIONS --amazonec2-request-spot-instance --amazonec2-spot-price=0.074 --amazonec2-instance-type=m4.2xlarge"
CLUSTER_NUM_NODES=11
parallel -j0 --no-run-if-empty --line-buffer docker-machine create $WORKER_OPTIONS < <(for n in $(seq 1 $CLUSTER_NUM_NODES); do echo "${CLUSTER_PREFIX}n$n"; done)
You now have over 300 cores available in your cluster, for less than a dollar an hour. Last month at WorldSense we used a similar cluster to process over 2 billion web pages from the Common Crawl repository in a few days. For now, let us bring up everything and compute the value of pi:
eval $(docker-machine env --swarm $MASTER)
docker-compose scale master=1 worker=10
docker run --net=container:master --entrypoint spark-submit gettyimages/spark:1.6.0-hadoop-2.6 --master spark://master:7077 --class org.apache.spark.examples.SparkPi /usr/spark/lib/spark-examples-1.6.0-hadoop2.6.0.jar
In a more realistic scenario one would use something like rsync to push locally developed jars to the master machine, and then use Docker volume support to expose those to the driver. That is how we do it at WorldSense⁵.
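A minimal sketch of that flow, using docker-machine scp as a stand-in for rsync; the jar path, the job class and the /jobs mount point are purely illustrative, not the actual WorldSense setup:

# Hypothetical sketch: copy a locally built jar to the master machine and mount it into the driver.
docker-machine scp target/my-app.jar $MASTER:/tmp/my-app.jar
eval $(docker-machine env --swarm $MASTER)
docker run --net=container:master \
  --volume /tmp:/jobs \
  --entrypoint spark-submit gettyimages/spark:1.6.0-hadoop-2.6 \
  --master spark://master:7077 --class com.example.MyJob /jobs/my-app.jar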
I think this is a powerful setup, with the great advantage that it is also easy to debug and replicate locally. I can simply tweak the flags⁶ in these scripts to get virtually the same environment on my laptop. This flexibility has been helpful countless times.
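For instance, a sketch of the local variant (assuming the VirtualBox driver and its usual host-only interface; the exact flag values are illustrative, not taken from the original scripts):

# Hypothetical local variant: same scripts, with the VirtualBox driver instead of amazonec2.
DRIVER_OPTIONS="--driver virtualbox --engine-install-url https://test.docker.com"
docker-machine create $DRIVER_OPTIONS ${CLUSTER_PREFIX}ks
# SWARM_OPTIONS stays the same, except the keystore IP comes from docker-machine
# and the VirtualBox host-only interface replaces eth0.
KEYSTORE_IP=$(docker-machine ip ${CLUSTER_PREFIX}ks)
NET_ETH=eth1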
Many companies offer hosted solutions for running code in Spark, and I highly recommend giving them a try. In our case, we had both budget restrictions and flexibility requirements that forced us into a custom deployment. It hasn’t come without its costs, but we are sure having some fun.
Ah, speaking of costs, do not forget to bring your cluster down!
docker-machine ls | grep "^${CLUSTER_PREFIX}" | cut -d' ' -f1 | xargs docker-machine rm -y
Footnotes
https://www.gartner.com/doc/reprints?id=1-36UFTCZ&ct=160517&st=sb
Check this out: Product teams for hardware products buff.ly/27j9nNM
www.agileandart.com/2016/05/04/how-i-reduced-my-cloud-cost-using-google-cloud/
This comprehensive guide covers the history of Google Cloud Platform, the products and services it offers, and where it fits in the overall cloud market.
Source: Google Cloud Platform: The smart person's guide – TechRepublic
www.oreilly.com/data/free/files/2015-data-science-salary-survey.pdf
www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf
csunplugged.org/
https://www.linkedin.com/pulse/intel-migrates-17000-vmware-vms-openstack-astonishing-ken-proulx
The highlights of the Intel report include:
$21 million in cost savings (to date)
Virtual server provisioning was reduced from 90 days to 30 minutes (yes, you did read that correctly)
90% reduction in provisioning service tickets
90% reduction in developer wait time for infrastructure
Delivery of self-service automation of server, storage and network provisioning
Enablement of Agile Methodologies, DevOps, and Continuous Integration / Continuous Deployment (CI/CD), and a dramatic reduction in time-to-market for applications
They are on the path to achieving these benefits.
IT resource reduction – by removing 2,750 VMs through shifting standalone databases to their Database as a Service offering, and by reducing their VM footprint through Phase II migration, consolidation and leases.
Manual intervention – In the 2014 calendar year, hosting fielded 8,400 manual service requests, accounting for approximately 190,000 hours spent awaiting fulfillment. Through the rollout of their OpenStack control plane architecture in 2015 and the Phase II importation of legacy instance metadata, Intel IT foresees achieving its goal of 80% of routine service requests being fulfilled instantly and through automation. By the end of 2016, with Phase III, Intel IT intends to hit a 90% goal.
Accelerated time to market – OpenStack, by virtue of being an open source project, yields direct control over the capabilities that business demands and is forward leaning in terms of application / service development, delivery, and operations. Factors which weigh heavily in favor of OpenStack included it being geared toward Agile Methodologies, DevOps, and Continuous Integration / Continuous Deployment (CI/CD).
Automation – It is in this regard that the open source community roots of OpenStack really pay dividends for Intel IT, in terms of being an open automation platform defined by its APIs. The Intel IT hosting team is able to leverage the same toolchain used by the OpenStack community for developing, building, validating, and deploying its data center operating system. This translates directly into enhancing the velocity with which Intel IT is able to deliver, as well as the overall quality and efficiency with which they run the environment at scale.
Optimized Hybrid Cloud – The 2016 goal is for Intel IT Hosting to automate public and private cloud utilization so that applications move to the optimal cloud based on business needs and workload requirements. Through OpenStack, as well as through its Cloud Foundry-based PaaS offering, Intel IT is increasing the productivity of its users, enabling it to extend the value of private cloud to additional groups and usages, thereby supporting its technology roadmap for utilizing hybrid (private-public) clouds to further increase scalability and cost efficiency.
Datacenter Best Practices – Through experience gained from this transformation, Intel IT Hosting best practices with software-defined infrastructure are shared with Intel Corporation's customers and the OpenStack community.
Capability vs. Budget – With all of the efficiency benefits, time-to-market advantages, compute utilization increases, and automated operations, Intel expects to see tremendous capability improvements at lower costs.
Almost everyone can agree that big data has taken the business world by storm, but what's next? Will data continue to grow? What technologies will develop around it? Or will big data become a relic as quickly as the next trend (cognitive technology? fast data?) appears on the horizon? Let's look at some of the predictions from the foremost experts in the field, and how likely they are to come to pass. Data volumes will continue to grow. There's absolutely no question that we will continue generating
Only time will tell which of these predictions will come to pass and which will merely pass into obscurity. But the important takeaway, I believe, is that big data is only going to get bigger, and those companies that ignore it will be left further and further behind.
As always, I'd like to know your thoughts on the topic; please share them in the comments below.
Thank you for reading my post. Here at LinkedIn and at Forbes I regularly write about management, technology and Big Data. If you would like to read my future posts then please click ‘Follow‘ and feel free to also connect via Twitter, Facebook, Slideshare, and The Advanced Performance Institute.
You might also be interested in my brand new books:
Source: 17 Predictions About The Future Of Big Data Everyone Should Read | Bernard Marr | LinkedIn
www.r2d3.us/visual-intro-to-machine-learning-part-1/ Sent from my iPad
Docker has hit the systems scene with great fanfare. It’s a very exciting advancement for systems, but there are some key misunderstandings around it.
https://valdhaus.co/writings/docker-misconceptions/
Google will trot out cloud computing chief Diane Greene and a raft of new features at its global user conference starting Wednesday, in an effort to catch up to Amazon and Microsoft.
Source: Google's aggressive new bid to move ahead in the cloud
OpenStack code developed by spin-out company nets 70% capex, 40% opex cuts
Source: DreamHost replaces VMware SDN with open source for big savings
Half-a-billion people stored files on Dropbox. Well, sort of. Really, the files were in Amazon’s cloud. Until Dropbox built its own. And threw the switch.
Source: The Epic Story of Dropbox's Exodus From the Amazon Cloud Empire | WIRED
Through Big Data as a Service (BDaaS), small to medium businesses (SMBs) can have the benefit of Big Data without the exorbitant expense of a full-time staff member.
Source: Big Data as a Service: the Next Big Thing? – OpenMind
From Cloud Computing, a Flipboard magazine by Francis Reyes
As friendly of an online advertisement as you'll find. In mid-August, the first commercially available ZFS cloud replication target…
From Cloud Computing, a Flipboard topic
Which is more secure: the public cloud or on-premises infrastructure? “Is it more secure to run in the cloud or more secure to run in my data…
With thousands of companies using Datadog to track their infrastructure, we can see software trends emerging in real time. Today we’re excited to share what we can see about true Docker adoption—no hype, just the facts.
Source: 8 surprising facts about real Docker adoption – Datadog
Class conflict: technically tied in price, the C180 and CLA200 are the entry points to Mercedes-Benz
Source: Classe C x CLA | QuatroRodas
From Cloud Computing, a Flipboard topic
In February 2015, Google Cloud Platform and 30+ industry leaders and researchers launched PerfKit Benchmarker (PKB). PKB is an open source cloud…
From Hacker News on Flipboard
You login to a Linux server with a performance issue: what do you check in the first minute? At Netflix we have a massive EC2 Linux cloud, and…
From Hacker News on Flipboard
Have you ever wondered how Google Photos helps you find all your favorite dog photos? With today’s release of Google Cloud Vision API, developers can…
From Hacker News on Flipboard
In my experience, one of the highest-impact upgrades you can perform to increase Raspberry Pi performance is to buy the fastest possible microSD…