
I am getting an error when installing the Application Timeline Server. Please find the error below.

stderr:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 89, in <module>
    ApplicationTimelineServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 38, in install
    self.install_packages(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 811, in install_packages
    name = self.format_package_name(package['name'])
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 546, in format_package_name
    raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))

resource_management.core.exceptions.Fail: Cannot match package for regexp name hadoop_${stack_version}-yarn. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo_2_6_5_0_292', 'accumulo_2_6_5_0_292-conf-standalone', 'accumulo_2_6_5_0_292-source', 'atlas-metadata', 'atlas-metadata-falcon-plugin', 'atlas-metadata-hive-plugin', 'atlas-metadata-sqoop-plugin', 'atlas-metadata-storm-plugin', 'atlas-metadata_2_6_5_0_292', 'atlas-metadata_2_6_5_0_292-falcon-plugin', 'atlas-metadata_2_6_5_0_292-hive-plugin', 'atlas-metadata_2_6_5_0_292-sqoop-plugin', 'atlas-metadata_2_6_5_0_292-storm-plugin', 'bigtop-tomcat', 'datafu', 'datafu_2_6_5_0_292', 'druid', 'druid_2_6_5_0_292', 'falcon', 'falcon-doc', 'falcon_2_6_5_0_292', 'falcon_2_6_5_0_292-doc', 'flume', 'flume-agent', 'flume_2_6_5_0_292', 'flume_2_6_5_0_292-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_6_5_0_292-client', 'hadoop_2_6_5_0_292-conf-pseudo', 'hadoop_2_6_5_0_292-doc', 'hadoop_2_6_5_0_292-hdfs-datanode', 'hadoop_2_6_5_0_292-hdfs-fuse', 'hadoop_2_6_5_0_292-hdfs-journalnode', 'hadoop_2_6_5_0_292-hdfs-namenode', 'hadoop_2_6_5_0_292-hdfs-secondarynamenode', 'hadoop_2_6_5_0_292-hdfs-zkfc', 'hadoop_2_6_5_0_292-httpfs', 'hadoop_2_6_5_0_292-httpfs-server', 'hadoop_2_6_5_0_292-libhdfs', 'hadoop_2_6_5_0_292-mapreduce-historyserver', 'hadoop_2_6_5_0_292-source', 'hadoop_2_6_5_0_292-yarn-nodemanager', 'hadoop_2_6_5_0_292-yarn-proxyserver', 'hadoop_2_6_5_0_292-yarn-resourcemanager', 'hadoop_2_6_5_0_292-yarn-timelineserver', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_6_5_0_292', 'hbase_2_6_5_0_292-doc', 'hbase_2_6_5_0_292-master', 'hbase_2_6_5_0_292-regionserver', 'hbase_2_6_5_0_292-rest', 'hbase_2_6_5_0_292-thrift', 'hbase_2_6_5_0_292-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive2_2_6_5_0_292', 'hive2_2_6_5_0_292-jdbc', 'hive_2_6_5_0_292', 'hive_2_6_5_0_292-hcatalog', 'hive_2_6_5_0_292-hcatalog-server', 'hive_2_6_5_0_292-jdbc', 'hive_2_6_5_0_292-metastore', 'hive_2_6_5_0_292-server', 'hive_2_6_5_0_292-server2', 'hive_2_6_5_0_292-webhcat', 'hive_2_6_5_0_292-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_6_5_0_292', 'knox', 'knox_2_6_5_0_292', 'livy', 'livy2', 'livy2_2_6_5_0_292', 'livy_2_6_5_0_292', 'mahout', 'mahout-doc', 'mahout_2_6_5_0_292', 'mahout_2_6_5_0_292-doc', 'oozie', 'oozie-client', 'oozie-common', 'oozie-sharelib', 'oozie-sharelib-distcp', 'oozie-sharelib-hcatalog', 'oozie-sharelib-hive', 'oozie-sharelib-hive2', 'oozie-sharelib-mapreduce-streaming', 'oozie-sharelib-pig', 'oozie-sharelib-spark', 'oozie-sharelib-sqoop', 'oozie-webapp', 'oozie_2_6_5_0_292', 'oozie_2_6_5_0_292-client', 'oozie_2_6_5_0_292-common', 'oozie_2_6_5_0_292-sharelib', 'oozie_2_6_5_0_292-sharelib-distcp', 'oozie_2_6_5_0_292-sharelib-hcatalog', 'oozie_2_6_5_0_292-sharelib-hive', 
'oozie_2_6_5_0_292-sharelib-hive2', 'oozie_2_6_5_0_292-sharelib-mapreduce-streaming', 'oozie_2_6_5_0_292-sharelib-pig', 'oozie_2_6_5_0_292-sharelib-spark', 'oozie_2_6_5_0_292-sharelib-sqoop', 'oozie_2_6_5_0_292-webapp', 'phoenix', 'phoenix-queryserver', 'phoenix_2_6_5_0_292', 'phoenix_2_6_5_0_292-queryserver', 'pig', 'pig_2_6_5_0_292', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_6_5_0_292-admin', 'ranger_2_6_5_0_292-atlas-plugin', 'ranger_2_6_5_0_292-hbase-plugin', 'ranger_2_6_5_0_292-hive-plugin', 'ranger_2_6_5_0_292-kafka-plugin', 'ranger_2_6_5_0_292-kms', 'ranger_2_6_5_0_292-knox-plugin', 'ranger_2_6_5_0_292-solr-plugin', 'ranger_2_6_5_0_292-storm-plugin', 'ranger_2_6_5_0_292-tagsync', 'ranger_2_6_5_0_292-usersync', 'shc', 'shc_2_6_5_0_292', 'slider', 'slider_2_6_5_0_292', 'spark', 'spark-history-server', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-history-server', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_6_5_0_292', 'spark2_2_6_5_0_292-history-server', 'spark2_2_6_5_0_292-master', 'spark2_2_6_5_0_292-python', 'spark2_2_6_5_0_292-worker', 'spark_2_6_5_0_292', 'spark_2_6_5_0_292-history-server', 'spark_2_6_5_0_292-master', 'spark_2_6_5_0_292-python', 'spark_2_6_5_0_292-worker', 'spark_llap', 'spark_llap_2_6_5_0_292', 'sqoop', 'sqoop-metastore', 'sqoop_2_6_5_0_292', 'sqoop_2_6_5_0_292-metastore', 'storm', 'storm-slider-client', 'storm_2_6_5_0_292', 'storm_2_6_5_0_292-slider-client', 'superset', 'superset_2_6_5_0_292', 'tez', 'tez_2_6_5_0_292', 'tez_hive2', 'tez_hive2_2_6_5_0_292', 'zeppelin', 'zeppelin_2_6_5_0_292', 'zookeeper', 'zookeeper-server', 'zookeeper_2_6_5_0_292-server', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'snappy', 'snappy-devel', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'snappy', 'snappy-devel', 'accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo_2_6_5_0_292', 'accumulo_2_6_5_0_292-conf-standalone', 'accumulo_2_6_5_0_292-source', 'atlas-metadata', 'atlas-metadata-falcon-plugin', 'atlas-metadata-hive-plugin', 'atlas-metadata-sqoop-plugin', 'atlas-metadata-storm-plugin', 'atlas-metadata_2_6_5_0_292', 'atlas-metadata_2_6_5_0_292-falcon-plugin', 'atlas-metadata_2_6_5_0_292-hive-plugin', 'atlas-metadata_2_6_5_0_292-sqoop-plugin', 'atlas-metadata_2_6_5_0_292-storm-plugin', 'bigtop-tomcat', 'datafu', 'datafu_2_6_5_0_292', 'druid', 'druid_2_6_5_0_292', 'falcon', 'falcon-doc', 'falcon_2_6_5_0_292', 'falcon_2_6_5_0_292-doc', 'flume', 'flume-agent', 'flume_2_6_5_0_292', 'flume_2_6_5_0_292-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 
'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_6_5_0_292-client', 'hadoop_2_6_5_0_292-conf-pseudo', 'hadoop_2_6_5_0_292-doc', 'hadoop_2_6_5_0_292-hdfs-datanode', 'hadoop_2_6_5_0_292-hdfs-fuse', 'hadoop_2_6_5_0_292-hdfs-journalnode', 'hadoop_2_6_5_0_292-hdfs-namenode', 'hadoop_2_6_5_0_292-hdfs-secondarynamenode', 'hadoop_2_6_5_0_292-hdfs-zkfc', 'hadoop_2_6_5_0_292-httpfs', 'hadoop_2_6_5_0_292-httpfs-server', 'hadoop_2_6_5_0_292-libhdfs', 'hadoop_2_6_5_0_292-mapreduce-historyserver', 'hadoop_2_6_5_0_292-source', 'hadoop_2_6_5_0_292-yarn-nodemanager', 'hadoop_2_6_5_0_292-yarn-proxyserver', 'hadoop_2_6_5_0_292-yarn-resourcemanager', 'hadoop_2_6_5_0_292-yarn-timelineserver', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_6_5_0_292', 'hbase_2_6_5_0_292-doc', 'hbase_2_6_5_0_292-master', 'hbase_2_6_5_0_292-regionserver', 'hbase_2_6_5_0_292-rest', 'hbase_2_6_5_0_292-thrift', 'hbase_2_6_5_0_292-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive2_2_6_5_0_292', 'hive2_2_6_5_0_292-jdbc', 'hive_2_6_5_0_292', 'hive_2_6_5_0_292-hcatalog', 'hive_2_6_5_0_292-hcatalog-server', 'hive_2_6_5_0_292-jdbc', 'hive_2_6_5_0_292-metastore', 'hive_2_6_5_0_292-server', 'hive_2_6_5_0_292-server2', 'hive_2_6_5_0_292-webhcat', 'hive_2_6_5_0_292-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_6_5_0_292', 'knox', 'knox_2_6_5_0_292', 'livy', 'livy2', 'livy2_2_6_5_0_292', 'livy_2_6_5_0_292', 'mahout', 'mahout-doc', 'mahout_2_6_5_0_292', 'mahout_2_6_5_0_292-doc', 'oozie', 'oozie-client', 'oozie-common', 'oozie-sharelib', 'oozie-sharelib-distcp', 'oozie-sharelib-hcatalog', 'oozie-sharelib-hive', 'oozie-sharelib-hive2', 'oozie-sharelib-mapreduce-streaming', 'oozie-sharelib-pig', 'oozie-sharelib-spark', 'oozie-sharelib-sqoop', 'oozie-webapp', 'oozie_2_6_5_0_292', 'oozie_2_6_5_0_292-client', 'oozie_2_6_5_0_292-common', 'oozie_2_6_5_0_292-sharelib', 'oozie_2_6_5_0_292-sharelib-distcp', 'oozie_2_6_5_0_292-sharelib-hcatalog', 'oozie_2_6_5_0_292-sharelib-hive', 'oozie_2_6_5_0_292-sharelib-hive2', 'oozie_2_6_5_0_292-sharelib-mapreduce-streaming', 'oozie_2_6_5_0_292-sharelib-pig', 'oozie_2_6_5_0_292-sharelib-spark', 'oozie_2_6_5_0_292-sharelib-sqoop', 'oozie_2_6_5_0_292-webapp', 'phoenix', 'phoenix-queryserver', 'phoenix_2_6_5_0_292', 'phoenix_2_6_5_0_292-queryserver', 'pig', 'pig_2_6_5_0_292', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_6_5_0_292-admin', 'ranger_2_6_5_0_292-atlas-plugin', 'ranger_2_6_5_0_292-hbase-plugin', 'ranger_2_6_5_0_292-hive-plugin', 'ranger_2_6_5_0_292-kafka-plugin', 'ranger_2_6_5_0_292-kms', 'ranger_2_6_5_0_292-knox-plugin', 'ranger_2_6_5_0_292-solr-plugin', 'ranger_2_6_5_0_292-storm-plugin', 'ranger_2_6_5_0_292-tagsync', 'ranger_2_6_5_0_292-usersync', 'shc', 'shc_2_6_5_0_292', 'slider', 'slider_2_6_5_0_292', 'spark', 'spark-history-server', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-history-server', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 
'spark2_2_6_5_0_292', 'spark2_2_6_5_0_292-history-server', 'spark2_2_6_5_0_292-master', 'spark2_2_6_5_0_292-python', 'spark2_2_6_5_0_292-worker', 'spark_2_6_5_0_292', 'spark_2_6_5_0_292-history-server', 'spark_2_6_5_0_292-master', 'spark_2_6_5_0_292-python', 'spark_2_6_5_0_292-worker', 'spark_llap', 'spark_llap_2_6_5_0_292', 'sqoop', 'sqoop-metastore', 'sqoop_2_6_5_0_292', 'sqoop_2_6_5_0_292-metastore', 'storm', 'storm-slider-client', 'storm_2_6_5_0_292', 'storm_2_6_5_0_292-slider-client', 'superset', 'superset_2_6_5_0_292', 'tez', 'tez_2_6_5_0_292', 'tez_hive2', 'tez_hive2_2_6_5_0_292', 'zeppelin', 'zeppelin_2_6_5_0_292', 'zookeeper', 'zookeeper-server', 'zookeeper_2_6_5_0_292-server']

stdout:

2019-02-28 19:11:15,211 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6

2019-02-28 19:11:15,216 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf

2019-02-28 19:11:15,217 - Group['hdfs'] {}

2019-02-28 19:11:15,219 - Group['hadoop'] {}

2019-02-28 19:11:15,219 - Group['users'] {}

2019-02-28 19:11:15,219 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}

2019-02-28 19:11:15,358 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}

2019-02-28 19:11:15,368 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}

2019-02-28 19:11:15,379 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}

2019-02-28 19:11:15,391 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}

2019-02-28 19:11:15,402 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}

2019-02-28 19:11:15,413 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}

2019-02-28 19:11:15,415 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}

2019-02-28 19:11:15,430 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if

2019-02-28 19:11:15,431 - Group['hdfs'] {}

2019-02-28 19:11:15,431 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}

2019-02-28 19:11:15,442 - FS Type:

2019-02-28 19:11:15,442 - Directory['/etc/hadoop'] {'mode': 0755}

2019-02-28 19:11:15,456 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}

2019-02-28 19:11:15,457 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}

2019-02-28 19:11:15,474 - Repository['HDP-2.6-repo-51'] {'append_to_file': False, 'base_url': 'http://10.66.72.201/HDP/centos7/2.6.5.0-292', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-51', 'mirror_list': None}

2019-02-28 19:11:15,482 - File['/etc/yum.repos.d/ambari-hdp-51.repo'] {'content': '[HDP-2.6-repo-51]\nname=HDP-2.6-repo-51\nbaseurl=http://10.66.72.201/HDP/centos7/2.6.5.0-292\n\npath=/\nenabled=1\ngpgcheck=0'}

2019-02-28 19:11:15,483 - Writing File['/etc/yum.repos.d/ambari-hdp-51.repo'] because contents don't match

2019-02-28 19:11:15,483 - Repository['HDP-UTILS-1.1.0.21-repo-51'] {'append_to_file': True, 'base_url': 'http://10.66.72.201/HDP-UTILS', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-51', 'mirror_list': None}

2019-02-28 19:11:15,487 - File['/etc/yum.repos.d/ambari-hdp-51.repo'] {'content': '[HDP-2.6-repo-51]\nname=HDP-2.6-repo-51\nbaseurl=http://10.66.72.201/HDP/centos7/2.6.5.0-292\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-51]\nname=HDP-UTILS-1.1.0.21-repo-51\nbaseurl=http://10.66.72.201/HDP-UTILS\n\npath=/\nenabled=1\ngpgcheck=0'}

2019-02-28 19:11:15,487 - Writing File['/etc/yum.repos.d/ambari-hdp-51.repo'] because contents don't match

2019-02-28 19:11:15,491 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}

2019-02-28 19:11:15,840 - Skipping installation of existing package unzip

2019-02-28 19:11:15,841 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}

2019-02-28 19:11:15,860 - Skipping installation of existing package curl

2019-02-28 19:11:15,860 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}

2019-02-28 19:11:15,877 - Skipping installation of existing package hdp-select

2019-02-28 19:11:16,194 - Command repositories: HDP-2.6-repo-51, HDP-UTILS-1.1.0.21-repo-51

2019-02-28 19:11:16,194 - Applicable repositories: HDP-2.6-repo-51, HDP-UTILS-1.1.0.21-repo-51

2019-02-28 19:11:16,196 - Looking for matching packages in the following repositories: HDP-2.6-repo-51, HDP-UTILS-1.1.0.21-repo-51

2019-02-28 19:11:19,536 - Adding fallback repositories: HDP-UTILS-1.1.0.21-repo-5, HDP-2.6-repo-5

2019-02-28 19:11:22,829 - No package found for hadoop_${stack_version}-yarn(hadoop_(\d|_)+-yarn$)

Command failed after 1 tries

Thanks & Regards,

Prashant Gupta


1 Answer


This might be solvable with the strategy described here.

i.e., with hadoop_${stack_version}-yarn as the problem package:

# clear yum's caches so stale repo metadata is not used
yum clean all
# find the exact installed package name; note grep needs the regex ".*", not the glob "*"
yum list installed | grep "hadoop_.*-yarn"
# finish any interrupted yum transactions (provided by yum-utils)
yum-complete-transaction
# remove the package found above, replacing xxx with the version suffix reported by grep
yum remove hadoop_xxx-yarn

The main explanation can be found on the page I linked to above. Pre-existing or freshly installed packages sometimes leave yum in an inconsistent state, causing intermittent problems like this one. The commands above bring the yum database back into a consistent state so that the problem package can be removed.
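
As a quick sanity check (a sketch, assuming stock yum and yum-utils on CentOS 7, which the repo URLs in the log target), the database can be inspected before and after the removal:

# scan the local rpm database for dependency problems (ships with yum)
yum check
# list installed packages with unsatisfied dependencies (from yum-utils,
# the same package that provides yum-complete-transaction)
package-cleanup --problems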

The name of the problem package has to be extracted from this part of the error message:

"resource_management.core.exceptions.Fail: Cannot match package for regexp name hadoop_${stack_version}-yarn."