
Error: there are 1 datanode(s) running and no node(s) are excluded in this operation

2022-02-19 09:15:12

I hit this problem while writing JSP. After searching on Baidu for the first time in a while, I found a solid explanation in this post: http://blog.sina.com.cn/s/blog_4c248c5801014nd1.html

I didn't notice the problem at first, but after checking the logs I found that the Hadoop datanode was not running.
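The check above can be sketched as follows. This is only a sketch: the install path matches the one used in this post, but the exact log file name depends on your user name and hostname, so adjust the glob to your setup.

```shell
# Assumed install path, as used throughout this post.
export HADOOP_HOME=/usr/local/hadoop

# First confirm whether a DataNode process is running at all.
jps | grep -i datanode || echo "no DataNode process found"

# Then inspect the tail of the DataNode log for the failure reason.
# (Log file name pattern is an assumption; it includes user and host.)
tail -n 50 "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log
```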

So I went to /usr/local/hadoop/bin (the Hadoop install directory), ran start-all.sh, and confirmed that the datanode would not start.

Reading the post above, I then realized that I had previously formatted HDFS improperly in the usual careless way, which is what caused this problem.

So I went into the Hadoop files and deleted the name and tmp directories under the hdfs directory.
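The cleanup step might look like the following. Treat it as a hedged sketch: the directory layout is an assumption based on the paths that appear later in this post's log output (/usr/local/hadoop/hdfs/name), and this operation permanently deletes all HDFS metadata and data, so check your hdfs-site.xml and core-site.xml before deleting anything.

```shell
# WARNING: this permanently deletes all HDFS metadata and data.
export HADOOP_HOME=/usr/local/hadoop

# Stop all daemons first so nothing holds the directories open.
"$HADOOP_HOME"/sbin/stop-all.sh

# Assumed locations (verify dfs.namenode.name.dir and hadoop.tmp.dir
# in your own configuration before running these):
rm -rf "$HADOOP_HOME"/hdfs/name   # namenode metadata
rm -rf "$HADOOP_HOME"/tmp         # hadoop.tmp.dir contents
```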

Then re-format Hadoop with namenode -format (not recommended).
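Judging by the output below, the post runs the old `hadoop namenode -format` form, which Hadoop 2.x itself flags as deprecated. A sketch of both forms, assuming the same install path:

```shell
# The form apparently used in this post; Hadoop 2.x prints a
# "DEPRECATED" warning but still performs the format:
/usr/local/hadoop/bin/hadoop namenode -format

# The equivalent, non-deprecated form on Hadoop 2.x:
/usr/local/hadoop/bin/hdfs namenode -format
```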

I have never found a definitive screen that confirms the format succeeded, so I can't say for sure that this is what a correct format looks like; but this time the format completed and everything started afterwards.

The output of a (seemingly) successful format is as follows:

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/09/12 23:47:53 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = jhf/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.0
STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.6.0.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib
15/09/12 23:47:53 INFO namenode.NameNode: createNameNode [-format]
15/09/12 23:47:54 WARN common.Util: Path /usr/local/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
15/09/12 23:47:54 WARN common.Util: Path /usr/local/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-0eee37c4-e688-44e3-83aa-42a785a4ab14
15/09/12 23:47:54 INFO namenode.FSNamesystem: No KeyProvider found.
15/09/12 23:47:54 INFO namenode.FSNamesystem: fsLock is fair:true
15/09/12 23:47:54 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/09/12 23:47:54 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/09/12 23:47:54 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/09/12 23:47:54 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Sep 12 23:47:54
15/09/12 23:47:54 INFO util.GSet: Computing capacity for map BlocksMap
15/09/12 23:47:54 INFO util.GSet: VM type = 64-bit
15/09/12 23:47:54 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/09/12 23:47:54 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/09/12 23:47:54 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/09/12 23:47:54 INFO blockmanagement.BlockManager: defaultReplication = 1
15/09/12 23:47:54 INFO blockmanagement.BlockManager: maxReplication = 512
15/09/12 23:47:54 INFO blockmanagement.BlockManager: minReplication = 1
15/09/12 23:47:54 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/09/12 23:47:54 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/09/12 23:47:54 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/09/12 23:47:54 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/09/12 23:47:54 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/09/12 23:47:54 INFO namenode.FSNamesystem: fsOwner = jhf (auth:SIMPLE)
15/09/12 23:47:54 INFO namenode.FSNamesystem: supergroup = supergroup
15/09/12 23:47:54 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/09/12 23:47:54 INFO namenode.FSNamesystem: HA Enabled: false
15/09/12 23:47:54 INFO namenode.FSNamesystem: Append Enabled: true
15/09/12 23:47:54 INFO util.GSet: Computing capacity for map INodeMap
15/09/12 23:47:54 INFO util.GSet: VM type = 64-bit
15/09/12 23:47:54 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/09/12 23:47:54 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/09/12 23:47:54 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/09/12 23:47:54 INFO util.GSet: Computing capacity for map cachedBlocks
15/09/12 23:47:54 INFO util.GSet: VM type = 64-bit
15/09/12 23:47:54 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/09/12 23:47:54 INFO util.GSet: capacity = 2^18 = 262144 entries
15/09/12 23:47:54 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/09/12 23:47:54 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/09/12 23:47:54 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/09/12 23:47:54 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/09/12 23:47:54 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/09/12 23:47:54 INFO util.GSe

After formatting, run start-all.sh again.

This script is Deprecated. instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: namenode running as process 3170. stop it first.
localhost: datanode running as process 9602. stop it first.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 3669. stop it first.
Starting yarn daemons
resourcemanager running as process 3870. stop it first.
localhost: nodemanager running as process 4105. stop it first.
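The output above says start-all.sh is deprecated, and the "running as process ... stop it first" lines mean the old daemons were still alive. A sketch of the cleaner Hadoop 2.x sequence, stopping first and then starting HDFS and YARN separately (install path assumed from this post):

```shell
export HADOOP_HOME=/usr/local/hadoop

# Stop whatever is still running, since stale daemons were detected.
"$HADOOP_HOME"/sbin/stop-yarn.sh
"$HADOOP_HOME"/sbin/stop-dfs.sh

# Start HDFS and YARN separately (start-all.sh is deprecated).
"$HADOOP_HOME"/sbin/start-dfs.sh
"$HADOOP_HOME"/sbin/start-yarn.sh
```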

Check with jps:

3870 ResourceManager
9602 DataNode
4105 NodeManager
3669 SecondaryNameNode
3170 NameNode
2468 org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar
6474 Bootstrap
9967 Jps

The datanode started normally; back in Eclipse, I restarted the server again and the error was gone.
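Besides jps, one way to double-check that the DataNode actually registered with the NameNode is `hdfs dfsadmin -report` (the install path is again assumed from this post):

```shell
# Report cluster status; "Live datanodes (1):" confirms the DataNode
# registered with the NameNode.
/usr/local/hadoop/bin/hdfs dfsadmin -report

# If the report still shows 0 live datanodes, re-check the datanode log
# for a clusterID mismatch between the name and data directories,
# a common aftereffect of re-formatting the namenode.
```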