Lean Startup Book by Eric Ries

Recently, I read The Lean Startup by Eric Ries, and I learned a lot about how to build a technology startup today.

Then I extracted the fundamental ideas of the Lean Startup philosophy. Below, I have already answered the first question; I am waiting for your answers and comments…


1- What is a Build-Measure-Learn Cycle?

Build: You start by building a minimum viable product (MVP). It is the essential tool for learning whether the core idea of a startup is promising or not. You may not even need to write a single line of code to build an MVP; the goal is to create the most basic environment in which to measure the behavior of your potential customers.

Measure: To understand your customers' interest, you need to use analytics tools; cohort analysis is one of the most important techniques for measuring user behavior. But the most valuable source is direct contact with the customers who have tried your MVP, so organize focus groups with those people. If you don't yet have any customers to measure, try Google Ads to attract a limited user base.
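To make the cohort-analysis idea concrete, here is a minimal sketch (the event data, user IDs, and week numbering are all hypothetical): users are grouped into cohorts by the week they signed up, and for each cohort we count how many users were still active some number of weeks later.

```python
from collections import defaultdict

# Hypothetical event log: (user_id, signup_week, active_week).
events = [
    ("u1", 0, 0), ("u1", 0, 1), ("u2", 0, 0),
    ("u3", 1, 1), ("u3", 1, 2), ("u4", 1, 1),
]

# Group users into cohorts by signup week, then record which users
# from each cohort were active N weeks after signing up.
cohorts = defaultdict(lambda: defaultdict(set))
for user, signup_week, active_week in events:
    cohorts[signup_week][active_week - signup_week].add(user)

# Retention table: cohort -> {weeks since signup -> active user count}.
retention = {
    week: {offset: len(users) for offset, users in sorted(weeks.items())}
    for week, weeks in cohorts.items()
}
print(retention)  # {0: {0: 2, 1: 1}, 1: {0: 2, 1: 1}}
```

Reading the table row by row shows whether later cohorts retain better than earlier ones, which is exactly the kind of signal the Measure step is after.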

Learn: Now it is time to decide whether to pivot or persevere. Pivoting means transforming your initial idea into what customers actually want. If your customers find your MVP valuable and commit to using it, then you don't need to pivot; you persevere, improving your MVP by adding features to your first product.

2- What is the meaning of a leap-of-faith hypothesis?

3- How do you decide whether a startup is on the right track or not?

4- What is the difference between successful and unsuccessful entrepreneurs?

5- How do you measure the value of a network?

MapReduce vs Spark (!)

People often make the mistake of comparing MapReduce with Spark directly.

Actually, MapReduce is a programming paradigm, so we cannot compare MapReduce itself with Spark. What we can compare is how Hadoop implements MapReduce versus how Spark does.

In Hadoop MapReduce, each job has exactly one Map and one Reduce phase; in Spark, multiple Map and Reduce operations can be chained together within a single job. Secondly, while Hadoop MapReduce writes the output of each job to disk as files, Spark keeps intermediate results in memory. As a result, Spark accelerates the overall execution time of the whole job.
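To show what the MapReduce paradigm itself looks like, independent of either engine, here is a toy word count sketched in plain Python (the function names and sample documents are my own, not any framework's API): a Map phase emits (key, value) pairs, a shuffle groups them by key, and a Reduce phase aggregates each group.

```python
from collections import defaultdict
from itertools import chain

# Map phase: emit a (word, 1) pair for every word in a document.
def map_phase(doc):
    return [(word, 1) for word in doc.split()]

# Shuffle: group all emitted values by their key.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: aggregate the values for each key.
def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["spark and hadoop", "spark uses memory"]
# In Hadoop, the intermediate pairs between these phases would be
# written to disk; Spark holds such intermediate data in memory.
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts)  # {'spark': 2, 'and': 1, 'hadoop': 1, 'uses': 1, 'memory': 1}
```

The point of the sketch is that Map, shuffle, and Reduce are just a computation pattern; Hadoop and Spark differ in where the data between those stages lives, not in the pattern itself.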