I tried to start the NameNode using the script

 ./start-all.sh 

and found the following messages in the log file (quoting only the NameNode log):

 2013-02-18 02:36:18,267 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
 /************************************************************
 STARTUP_MSG: Starting NameNode
 STARTUP_MSG:   host = one/192.168.1.8
 STARTUP_MSG:   args = []
 STARTUP_MSG:   version = 1.0.1
 STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
 ************************************************************/
 2013-02-18 02:36:18,489 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
 2013-02-18 02:36:18,505 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
 2013-02-18 02:36:18,506 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
 2013-02-18 02:36:18,506 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
 2013-02-18 02:36:18,715 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
 2013-02-18 02:36:18,723 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
 2013-02-18 02:36:18,729 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
 2013-02-18 02:36:18,731 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
 2013-02-18 02:36:18,753 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
 2013-02-18 02:36:18,756 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
 2013-02-18 02:36:18,756 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
 2013-02-18 02:36:18,756 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
 2013-02-18 02:36:18,848 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
 2013-02-18 02:36:18,848 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
 2013-02-18 02:36:18,848 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
 2013-02-18 02:36:18,853 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
 2013-02-18 02:36:18,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
 2013-02-18 02:36:18,957 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
 2013-02-18 02:36:18,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
 2013-02-18 02:36:19,003 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
 2013-02-18 02:36:19,007 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
 2013-02-18 02:36:19,007 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
 2013-02-18 02:36:19,013 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
 2013-02-18 02:36:19,014 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
 2013-02-18 02:36:19,023 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
 2013-02-18 02:36:19,031 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
 2013-02-18 02:36:19,031 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 194 msecs
 2013-02-18 02:36:19,053 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
 2013-02-18 02:36:19,053 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
 2013-02-18 02:36:19,054 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
 2013-02-18 02:36:19,054 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
 2013-02-18 02:36:19,054 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 16 msec
 2013-02-18 02:36:19,054 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
 2013-02-18 02:36:19,054 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
 2013-02-18 02:36:19,054 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
 2013-02-18 02:36:19,062 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
 2013-02-18 02:36:19,068 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
 2013-02-18 02:36:19,085 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort7000 registered.
 2013-02-18 02:36:19,086 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort7000 registered.
 2013-02-18 02:36:19,088 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: one/192.168.1.8:7000
 2013-02-18 02:36:19,094 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 2 msec
 2013-02-18 02:36:19,094 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 2 msec processing time, 2 msec clock time, 1 cycles
 2013-02-18 02:36:19,094 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
 2013-02-18 02:36:19,094 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
 2013-02-18 02:36:19,095 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
 2013-02-18 02:36:49,141 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
 2013-02-18 02:36:49,231 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
 2013-02-18 02:36:49,246 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
 2013-02-18 02:36:49,256 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
 2013-02-18 02:36:49,257 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
 2013-02-18 02:36:49,257 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
 2013-02-18 02:36:49,257 INFO org.mortbay.log: jetty-6.1.26
 2013-02-18 02:36:49,553 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
 2013-02-18 02:36:49,553 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
 2013-02-18 02:36:49,623 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 7000: starting
 2013-02-18 02:36:49,624 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 7000: starting
 2013-02-18 02:36:49,624 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 7000: starting
 2013-02-18 02:36:49,624 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 7000: starting
 2013-02-18 02:36:49,624 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 7000: starting
 2013-02-18 02:36:49,624 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 7000: starting
 2013-02-18 02:36:49,624 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 7000: starting
 2013-02-18 02:36:49,624 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 7000: starting
 2013-02-18 02:36:49,624 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 7000: starting
 2013-02-18 02:36:49,625 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 7000: starting
 2013-02-18 02:36:49,632 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 7000: starting
 2013-02-18 02:36:49,637 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
 2013-02-18 02:43:14,155 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
 2013-02-18 02:43:14,279 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
 2013-02-18 02:43:14,281 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 7000, call addBlock(/tmp/hadoop-hadoop/mapred/system/jobtracker.info, DFSClient_-273997685, null) from 192.168.1.8:62962: error: java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
 java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:601)
     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:415)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
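
As far as I can tell, the key lines are "Network topology has 0 racks and 0 datanodes" and the "could only be replicated to 0 nodes, instead of 1" error: not a single DataNode has registered with the NameNode, so HDFS has nowhere to put even one replica. A quick way to confirm this (a sketch, assuming the standard Hadoop 1.x tools are on the PATH) would be:

 # list the running Hadoop daemons; a DataNode should appear alongside the NameNode
 jps
 # ask the NameNode how many DataNodes it can see; "Datanodes available: 0" would confirm the diagnosis
 hadoop dfsadmin -report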

What needs to be done to make this work properly? P.S. The contents of the hdfs-site.xml configuration file are as follows (note that the replication factor is not 1):

 <?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!-- Put site-specific property overrides in this file. -->
 <configuration>
   <property>
     <name>dfs.replication</name>
     <value>2</value>
   </property>
 </configuration>
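
For what it's worth, dfs.replication only sets the requested replication factor; with no live DataNodes even a factor of 1 would fail the same way. If the DataNodes are supposed to be running, their logs on the worker nodes might show why they fail to start or cannot reach the NameNode. A minimal check, assuming the default Hadoop 1.x log location and that the daemons run as the hadoop user:

 # log file names follow the pattern hadoop-<user>-datanode-<hostname>.log
 tail -n 50 $HADOOP_HOME/logs/hadoop-hadoop-datanode-*.log
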
  • 2013-02-18 02:43:14,279 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1 This looks like a file system permissions problem. Which user are Hadoop and its processes running as? Does this user have read/write access to the relevant file system directories? - a_gura
  • @a_gura Hadoop is launched as the hadoop user. The path specified in the error does not exist at all on the node where I ran the script, and the /tmp/hadoop-hadoop/ folder (which contains only the dfs folder) is owned by the hadoop user with permissions rwxrwxrwx (0777). - ivan89
  • @ivan89 "The path specified in the error does not exist at all on that node": maybe it is simply trying to create those directories and the file. - a_gura
  • @a_gura I am looking at it through WinSCP, which shows the file system of the specified host, in my case the virtual machine running Linux. In the properties of the folders in question it shows permissions rwxrwxrwx (0777); the same can be checked from the shell, as sketched below. - ivan89
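
A shell-side sketch of the permission check discussed in the comments above, assuming the paths from the error message (the mapred/system subdirectory may not exist yet, as noted):

 # show owner and permissions of the directories HDFS and MapReduce use under /tmp
 ls -ld /tmp/hadoop-hadoop /tmp/hadoop-hadoop/dfs
 ls -ld /tmp/hadoop-hadoop/mapred/system 2>/dev/null || echo "mapred/system does not exist yet"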
