Vinay's musings
The Systems Blog

Sunday, February 03, 2013
Leadership
Leadership is yet another commonly used but poorly understood concept, despite the vast literature on the topic. The definition I follow is the one Partha Ghosh, a visiting lecturer at MIT, taught me: true leaders are servants of a cause. There can be no true leadership without a worthwhile cause. Once you dedicate yourself to a worthwhile cause, leadership pivots around "enlightenment", not "entitlement". True leaders are often silent; they bring about transformation without much fanfare or hoopla.

Sunday, March 04, 2012
Espoused process vs Process in use
Managers beware. Processes that are espoused may not be the processes in use.
Several academics have described this phenomenon. Simply put, what is actually practised may differ from what is believed to be practised. The most common cause is that processes evolve after their initial rollout, but the evolution is rarely documented. Structural changes such as reorganizations, attrition, and new hiring also widen the gap. Nothing beats experience: going through the process hands-on is the best way to understand what is really going on. The next best is first-hand observation of the process in use.
Saturday, December 04, 2010
Predictive Analytics
Predictive analytics is hot. Advances in hardware, statistics, and business intelligence software have made it usable and performant. Predictive analytics, as the name suggests, helps extract business intelligence from data using data mining, pattern recognition, and probabilistic algorithms. Consider the following example: given the history of orders for a product, how likely is a customer in a certain age group to buy it? The answer can be computed in several ways.
1. Find the clusters of customers who bought the product, then use this customer's distance from those clusters to indicate the likelihood of buying.
2. Use regression analysis: Y = a1*X1 + a2*X2 + ..., where Y represents revenue for a product, X1, X2, X3 represent causal factors such as age and geography, and a1, a2, a3 are the coefficients. If age is statistically significant, its coefficient will be significantly different from zero. Once age is confirmed to be statistically significant, we could introduce a causal variable for each age bracket and find out which of those is the most significant (see the sketch after this list).
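Here is a minimal sketch of approach 2 in Python, fitting an ordinary least-squares regression with numpy on made-up data; the factor names, coefficients, and noise levels are all illustrative assumptions, and a full treatment would compute t-statistics (for example with statsmodels) to judge significance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical order history: age and a geography indicator as causal candidates.
n = 200
age = rng.uniform(18, 70, n)
geo = rng.integers(0, 2, n)                               # 0 = region A, 1 = region B
revenue = 5.0 * age + 12.0 * geo + rng.normal(0, 10, n)   # synthetic ground truth

# Fit Y = a0 + a1*age + a2*geo by ordinary least squares.
X = np.column_stack([np.ones(n), age, geo])
coeffs, *_ = np.linalg.lstsq(X, revenue, rcond=None)
a0, a1, a2 = coeffs
print(f"intercept={a0:.2f}, age coefficient={a1:.2f}, geo coefficient={a2:.2f}")
# A coefficient well away from zero (relative to its standard error) suggests the
# factor is statistically significant; computing that standard error is left to
# a statistics library in practice.
```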
Oracle offers predictive analytics at several layers. PL/SQL comes with a predictive analytics package called DBMS_PREDICTIVE_ANALYTICS. Oracle also has a product called RTD (Real-Time Decisions) that is bundled with OBIEE, Oracle Business Intelligence Enterprise Edition.
Crystal Ball, an Excel add-on for predictive analytics, and Oracle Data Mining are other tools in Oracle's arsenal. Oracle Demantra provides predictive analytics for forecasting and demand management. With IBM having acquired SPSS, the industry's landscape has become interesting.
Publications like "Competing on Analytics" from Harvard Business School Press and a recent survey on BI trends in MIT's Sloan Management Review have contributed to the heightened interest and investment in this upcoming discipline. Companies like Netflix have grown into billion-dollar businesses by predicting consumer buying patterns from "clicks".
Friday, October 08, 2010
Correlation vs Causality
A few years ago I was at a conference in Chennai, sitting on the terrace of a restaurant with a few colleagues. Sales of cold drinks were at an all-time high, with everybody ordering Coke, beer, and the like.
At the same time, I noticed a huge influx of patients at the hospital near the restaurant.
So there must have been a positive correlation between the sales of cold drinks and the inflow of patients to the hospital: as one went up or down, so did the other.
But does that mean the cold drinks caused people to be hospitalized? Or, vice versa, did people drink because someone got hospitalized?
Neither is correct in this situation. In reality, both the influx of people to the hospital and the sales of cold drinks were caused by the sweltering heat of Chennai. So if we were to forecast the sales of cold drinks, the causal factor would be "temperature", not "the number of people admitted to the nearby hospital".
And this is precisely one of the key things to watch out for when analyzing the results of regression analysis. While regression will give you the correlation between two variables, it may take an expert to confirm whether there is causality between them.
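To make the confounding concrete, here is a small Python simulation (all numbers invented) in which temperature drives both series; the drink-hospital correlation comes out strongly positive even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(1)
days = 365
temperature = rng.uniform(25, 42, days)   # degrees C, a hot Chennai year

# Both quantities respond to temperature, plus independent noise.
drink_sales = 100 + 8 * temperature + rng.normal(0, 20, days)
admissions = 10 + 2 * temperature + rng.normal(0, 5, days)

print("corr(drinks, admissions): ", np.corrcoef(drink_sales, admissions)[0, 1])
print("corr(drinks, temperature):", np.corrcoef(drink_sales, temperature)[0, 1])
# Both correlations are strong, but only temperature belongs in a forecasting
# model for drink sales; admissions is just a confounded proxy for the heat.
```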
Wednesday, October 06, 2010
The dangers of serial thinking
I bought a horse for $10 and sold it to a friend for $20. I then bought the same horse back from the same person for $30 and sold it to him again for $40. What was my profit?
If you came up with $10 as the answer, via (20 - 10) + (20 - 30) + (40 - 30), you fell into the trap of serial thinking.
If you came up with $20, you are right, because these are two separate transactions:
Transaction 1: you bought and sold the horse. Profit = 20 - 10 = $10.
Transaction 2: you bought and sold the horse. Profit = 40 - 30 = $10.
Total profit = 10 + 10 = $20.
If you are financially savvy, you would compute
profit = total cash inflow - total cash outflow
       = (40 + 20) - (30 + 10)
       = 60 - 40
       = $20
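The cash-flow view is a one-liner sanity check in Python:

```python
# Sum what came in, subtract what went out; no transaction ordering needed.
outflows = [10, 30]   # purchases
inflows = [20, 40]    # sales
profit = sum(inflows) - sum(outflows)
print(profit)  # 20
```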
Beware the dangers of serial thinking. Start thinking laterally.
Tuesday, September 28, 2010
The three best practices in software development
Michael Cusumano documented the three most effective practices in software development in his book "The Business of Software". These three practices have produced significant defect reductions in large companies. While Six Sigma, Lean, CMMI, and other methodologies can be applied to software and have been effective to varying degrees, the following three practices must be in place first; all other methodologies and practices are add-ons. So, what are the three key practices?
1. Early prototyping
Do a proof of concept by prototyping early and show it to the users. Seeing is believing: when users see something, they can spot obvious flaws and limitations.
2. Reviews, reviews, reviews
We need reviews at each stage: high-level design reviews, detailed design reviews, code reviews, unit test case reviews, system test reviews, project plan reviews, and so on. Review provides a negative feedback mechanism and hence stabilizes the output, to borrow the analogy of a closed-loop negative feedback control system. In such a system, the output is
Y = X * G / (1 + G*H)
where X is the input, G is the gain/amplification/distortion, and H is the amount of feedback.
In a large project, G can be assumed to be large and H close to 1, so G*H is much greater than 1. Thus Y ≈ X * G / (G*H) = X / H.
Now if the review is 100% perfect, H = 1 and therefore Y ≈ X: the output of a software stage equals its input, which is the specification for that stage.
In essence, the output of software development will follow the specifications accurately if there are rigorous reviews.
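A quick numeric check of the closed-loop formula, with purely illustrative values, shows the output tracking the specification as review quality H approaches 1:

```python
# Y = X * G / (1 + G*H): with a large "gain" G, the output is pinned to X/H,
# so the distortion G introduces is washed out by the feedback (the reviews).
X = 100.0   # the specification (input to the stage)
G = 1000.0  # gain/amplification/distortion of the stage
for H in (0.5, 0.9, 1.0):
    Y = X * G / (1 + G * H)
    print(f"H={H} -> Y={Y:.2f}")   # Y approaches X as H -> 1
```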
Eric Raymond coined "Linus's Law", named for Linus Torvalds, the creator of Linux: "Given enough eyeballs, all bugs are shallow." How true! No wonder Linux is such a robust operating system: because it is open source, many eyes look at the code every day and suggest improvements. So the next time you want to review code, call all the developers into a room and project the code. Let them all look at it and have fun dissecting it in a cordial and friendly environment.
3. Daily regression tests and builds
The code must be compiled and linked daily. Automated regression tests must be run daily (the more the better), with emails sent to everyone concerned, including senior management, when the tests fail. This ensures that as new features are added, existing features continue to work.
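As a rough sketch, a nightly driver could look like the following Python script. The pytest suite, the SMTP relay smtp.example.com, and the email addresses are all placeholder assumptions; most teams today would wire this into a CI server instead.

```python
import smtplib
import subprocess
from email.message import EmailMessage

# Run the regression suite and capture its output.
result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)

if result.returncode != 0:
    # Tests failed: notify everyone concerned, including senior management.
    msg = EmailMessage()
    msg["Subject"] = "Nightly regression tests FAILED"
    msg["From"] = "build@example.com"
    msg["To"] = "dev-team@example.com, management@example.com"
    msg.set_content(result.stdout + "\n" + result.stderr)
    with smtplib.SMTP("smtp.example.com") as smtp:
        smtp.send_message(msg)
```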
Sunday, September 26, 2010
Bayes theorem
Bayes' theorem is the foundation of Bayesian statistics and of the emerging discipline of predictive analytics. Reverend Thomas Bayes, an 18th-century British mathematician, derived the theorem as a special case of probability theory. He did not publish it, reportedly fearing that it might not pass rigorous scientific scrutiny; the theorem was recovered and published after his death.
Here's the theorem and a few practical applications.
P(Ri|E) = P(E|Ri) * P(Ri) / sum over j = 1..n of [ P(E|Rj) * P(Rj) ]
P(Ri) is called the prior probability of event Ri. It represents what we already know about Ri from past history. E is the fresh new evidence that bears on Ri.
P(Ri|E) is the posterior probability of event Ri given the evidence E.
P(E|Ri) is the likelihood of observing the evidence E if Ri holds.
As new evidence surfaces, the theorem lets us update our knowledge of the probability of the event Ri: P(Ri) was our knowledge based on history alone, while P(Ri|E) is our updated knowledge after taking the evidence E into account. These probabilities can be point values or can follow a certain distribution.
Here are a few practical examples where the theorem could be applied. (An example similar to the first one below was cited in a recent issue of Sloan Management Review.)
1. I know the history of rainfall in my region for the past 10 years. I now have evidence that this year's temperatures were higher than normal, and we know that higher temperatures correspond to higher rainfall according to a certain distribution. Using this evidence, we can update the probability of rain this year. If I calculated the probability from history alone (which is what frequentists would do), I would be ignoring the key evidence that surfaced this year.
2. We know the delivery history of a certain supplier. New evidence has arrived that the supplier's capacity is full due to a contract they have signed with another customer, and we know the relationship between the supplier's capacity and on-time delivery. Given the evidence, we can find the probability of on-time delivery this month. Without Bayes' theorem (relying on history alone, as a frequentist would), the probability would be based solely on the supplier's historical performance and would ignore the critical new evidence.
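Here is a toy Bayesian update for the rainfall example; the prior and the likelihoods are invented numbers, purely for illustration.

```python
# R1 = "heavy rain year", R2 = "normal rain year",
# E = "temperatures were higher than normal this year".
prior = {"R1": 0.3, "R2": 0.7}        # P(Ri), from 10 years of history
likelihood = {"R1": 0.8, "R2": 0.3}   # P(E|Ri), assumed known

evidence = sum(likelihood[r] * prior[r] for r in prior)  # P(E), the denominator
posterior = {r: likelihood[r] * prior[r] / evidence for r in prior}
print(posterior)  # P(R1|E) rises from 0.3 to about 0.53 given the evidence
```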
Bayes' theorem does have certain limitations and gotchas in practice, as we will see in future posts.