I tried to create a folder in the HDFS file system using the command

 ./hadoop fs -mkdir /user/hadoop 

and as a result received the following messages

 13/02/17 09:45:50 INFO ipc.Client: Retrying connect to server: one/192.168.1.8:9000. Already tried 0 time(s).
 13/02/17 09:45:51 INFO ipc.Client: Retrying connect to server: one/192.168.1.8:9000. Already tried 1 time(s).
 13/02/17 09:45:52 INFO ipc.Client: Retrying connect to server: one/192.168.1.8:9000. Already tried 2 time(s).
 13/02/17 09:45:53 INFO ipc.Client: Retrying connect to server: one/192.168.1.8:9000. Already tried 3 time(s).
 13/02/17 09:45:54 INFO ipc.Client: Retrying connect to server: one/192.168.1.8:9000. Already tried 4 time(s).
 13/02/17 09:45:55 INFO ipc.Client: Retrying connect to server: one/192.168.1.8:9000. Already tried 5 time(s).
 13/02/17 09:45:56 INFO ipc.Client: Retrying connect to server: one/192.168.1.8:9000. Already tried 6 time(s).
 13/02/17 09:45:57 INFO ipc.Client: Retrying connect to server: one/192.168.1.8:9000. Already tried 7 time(s).
 13/02/17 09:45:58 INFO ipc.Client: Retrying connect to server: one/192.168.1.8:9000. Already tried 8 time(s).
 13/02/17 09:45:59 INFO ipc.Client: Retrying connect to server: one/192.168.1.8:9000. Already tried 9 time(s).
 Bad connection to FS. command aborted. exception: Call to one/192.168.1.8:9000 failed on connection exception: java.net.ConnectException: Connection refused

The following is specified in the /export/hadoop-1.0.1/conf/core-site.xml file:

 <?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 <!-- Put site-specific property overrides in this file. -->

 <configuration>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://192.168.1.8:9000</value>
   </property>
 </configuration>

In connection with these messages I wanted to clarify: did I specify the correct port? If not, which one should I specify?

    1 answer

    The correct port is any port that:

    • fits into the integer type (valid TCP ports are 0–65535);
    • is not already occupied by another process (see the check below);
    • is not reserved for the needs of the OS.
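
    For example, on a Linux host you can check whether port 9000 is already taken; this is only a minimal sketch and assumes the netstat (or ss) utility is available:

        # show any process currently listening on TCP port 9000
        netstat -tlnp | grep 9000    # or: ss -tlnp | grep 9000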

    As for our specific case, the following problems are possible:

    • the NameNode is not running on host 192.168.1.8, port 9000;
    • the remote host 192.168.1.8 is not reachable from the host on which the operation is performed (check with the ping command);
    • the remote host 192.168.1.8 is reachable, but the specified port is closed on it (check with telnet);
    • a hostname resolution error occurs (Hadoop is sensitive to hostname settings); see the sketch of these checks below.
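
    A minimal sketch of these checks, assuming a Linux host, the Hadoop 1.0.1 installation from the question, and the JDK's jps tool on the PATH; adjust names and paths to your setup:

        # 1. is the host reachable at all?
        ping -c 3 192.168.1.8

        # 2. is anything listening on port 9000? ("Connection refused" means no)
        telnet 192.168.1.8 9000      # or: nc -zv 192.168.1.8 9000

        # 3. are the Hadoop daemons (NameNode in particular) running on 192.168.1.8?
        jps

        # 4. does the hostname "one" resolve to the address you expect?
        getent hosts one
        cat /etc/hosts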
    • @a_gura 192.168.1.8 is the machine on which I am trying to create the folder in HDFS, and of course it responds to ping. telnet 192.168.1.8 9000 printed: Trying 192.168.1.8 ... telnet: Unable to connect to remote host: Connection refused. I don't know anything about resolving (because I don't know what it is). - ivan89
    • @ivan31 If you cannot connect with telnet, then the corresponding service (most likely the NameNode) is not listening on that port. Look through the logs of hadoop and its whole zoo. - a_gura
    • @a_gura How do I redirect the output of the ./hadoop namenode command to a file? ./hadoop namenode &> /tmp/startnamenode.txt creates the specified file but does not write the results there; they still go to the console, and judging by the fragments I saw they clearly contain an error. - ivan89
    • @ivan31 Why are you running namenode by hand? When hadoop starts, several processes are launched (including the namenode), and each of them writes its own log. All the logs are usually in the same directory; the paths to the logs should be specified in the configuration files. - a_gura
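
    As a sketch of where to look, assuming the default Hadoop 1.x layout in which the daemons write their logs under $HADOOP_HOME/logs (here /export/hadoop-1.0.1/logs) unless HADOOP_LOG_DIR points elsewhere:

        # start the HDFS daemons (NameNode, DataNode, SecondaryNameNode);
        # each of them writes its own log file
        /export/hadoop-1.0.1/bin/start-dfs.sh

        # the logs normally land under $HADOOP_HOME/logs
        ls /export/hadoop-1.0.1/logs/

        # look at the tail of the NameNode log for the reason it did not start
        # (file names follow the pattern hadoop-<user>-namenode-<host>.log)
        tail -n 100 /export/hadoop-1.0.1/logs/hadoop-*-namenode-*.log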