I am trying to install open source Hadoop, or to build HDP from source so it can be installed by Ambari. I can see that it is possible to build the Java packages for each component with the documentation available in the Apache repos, but how can I use those to build the rpm/deb packages that Hortonworks provides in the HDP distribution for installation by Ambari?
1 Answer
@ShivamKhandelwal Building Ambari From Source is a challenge, but one that can be accomplished with some persistence. In this post I have shared the commands I recently used to build Ambari 2.7.5 on CentOS:
Ambari 2.7.5 installation failure on CentOS 7
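For reference, the core of the build looks roughly like this. This is a minimal sketch based on the Apache Ambari build docs; the exact flags and output paths can differ, so see the linked post for the full steps:

```sh
# Build Ambari 2.7.5 rpms from the Apache source release (sketch)
wget https://www-eu.apache.org/dist/ambari/ambari-2.7.5/apache-ambari-2.7.5-src.tar.gz
tar xfz apache-ambari-2.7.5-src.tar.gz && cd apache-ambari-2.7.5-src
mvn versions:set -DnewVersion=2.7.5.0.0
mvn -B clean install rpm:rpm -DskipTests -Dpython.ver="python >= 2.6"
# the rpms land under e.g. ambari-server/target/rpm/ and ambari-agent/target/rpm/
```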
"Building HDP From Source" is very big task as it requires building each component separately, creating your own public/private repo which contains all the component repos or rpms for each operating system flavor. This is a monumental task which was previously managed by many employees and component contributors at Hortonworks.
When you install Ambari from HDP, it comes out of the box with their repos, including their HDP stack (HDFS, YARN, MapReduce, Hive, etc.). When you install Ambari From Source, there is no stack. The only solution is to Build Your Own Stack, which is something I am an expert at doing.
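To make "a stack" concrete: Ambari discovers stack definitions under its resources directory, so a bare-bones custom stack can be scaffolded roughly like this (the DDP/1.0 name is illustrative, not a published stack):

```sh
# Scaffold a minimal custom stack for Ambari to discover (sketch)
STACKS=/var/lib/ambari-server/resources/stacks
mkdir -p $STACKS/DDP/1.0/{repos,services}
cat > $STACKS/DDP/1.0/metainfo.xml <<'EOF'
<metainfo>
  <versions><active>true</active></versions>
</metainfo>
EOF
# repos/repoinfo.xml points at your rpm repo; each folder under services/
# defines one service (NiFi, Hue, ...) with its own metainfo.xml and scripts
ambari-server restart   # pick up the new stack definition
```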
I am currently building a DDP stack as an example to share with the public. I started this project by reverse engineering an HDF Management Pack, which includes the stack structure (files/folders) to roll out NiFi, Kafka, ZooKeeper, and more. I have customized it to be my own stack with my own services and components (NiFi, Hue, Elasticsearch, etc.).
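If you want to do the same kind of reverse engineering, a management pack is just a tarball you can unpack and read (the HDF version below is illustrative, and the listed contents are what mpacks typically contain):

```sh
# Unpack an HDF management pack to study its structure (sketch)
tar xzf hdf-ambari-mpack-3.5.1.0-17.tar.gz
ls hdf-ambari-mpack-3.5.1.0-17/
#   mpack.json        -> mpack metadata (name, version, artifacts)
#   common-services/  -> reusable service definitions (NIFI, KAFKA, ...)
#   stacks/           -> the stack layout the mpack installs
# or install it directly into a running Ambari server:
ambari-server install-mpack --mpack=hdf-ambari-mpack-3.5.1.0-17.tar.gz
```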
My goal with DDP is to eventually make my own repos for the components and services I want, with the versions I want to install in my cluster. Next, I will copy some HDP components such as HDFS, YARN, and Hive from the last free public HDP stack (HDP 3.1.5) directly into my DDP stack.
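Because service definitions are plain files under the stack directory, that copy step can be as simple as the following sketch, assuming the HDP 3.1 stack definitions are present on the same machine:

```sh
# Copy selected HDP service definitions into the custom DDP stack (sketch)
SRC=/var/lib/ambari-server/resources/stacks/HDP/3.1/services
DST=/var/lib/ambari-server/resources/stacks/DDP/1.0/services
for svc in HDFS YARN HIVE; do
  cp -r "$SRC/$svc" "$DST/"
done
# note: these definitions often extend entries under common-services/,
# so the referenced versions must be present there as well
```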