There is a problem I cannot solve: I need to write logs to Hadoop. For this we use td-agent with the WebHDFS plugin, but it does not start and reports this error:

    2018-11-28 20:22:19 +0300 [error]: #0 /usr/sbin/td-agent:7:in `<main>'
    2018-11-28 20:22:19 +0300 [error]: #0 unexpected error error_class=RuntimeError error="webhdfs is not available now."
    2018-11-28 20:22:19 +0300 [error]: #0 suppressed same stacktrace
    2018-11-28 20:22:19 +0300 [info]: Worker 0 finished unexpectedly with status 1
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-plugin-elasticsearch' version '2.12.1'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-plugin-kafka' version '0.6.1'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-plugin-mongo' version '0.8.1'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.6'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-plugin-s3' version '0.8.5'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-plugin-scribe' version '0.10.14'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-plugin-td' version '0.10.29'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-plugin-td-monitoring' version '0.2.3'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-plugin-webhdfs' version '1.2.3'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluent-plugin-webhdfs' version '0.7.1'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluentd' version '1.3.0'
    2018-11-28 20:22:20 +0300 [info]: gem 'fluentd' version '0.12.40'
    2018-11-28 20:22:20 +0300 [info]: adding match pattern="hdfs.*.*" type="webhdfs"
    2018-11-28 20:22:20 +0300 [warn]: #0 'flush_interval' is ignored because default 'flush_mode' is not 'interval': 'lazy'
    2018-11-28 20:22:20 +0300 [info]: adding source type="http"
    2018-11-28 20:22:20 +0300 [info]: adding source type="debug_agent"
    2018-11-28 20:22:20 +0300 [info]: #0 starting fluentd worker pid=5286 ppid=59682 worker=0
    2018-11-28 20:22:20 +0300 [warn]: #0 webhdfs check request failed. (namenode: hdp85:50070, error: gss_init_sec_context did not return GSS_S_COMPLETE: Unspecified GSS failure. Minor code may provide more information Ticket expired )

That is the error, and it is not clear what it means.

I suspect the cause is Kerberos authentication. But how do I solve it? Please advise.

Judging by Google results, only a couple of people have ever run into this.
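For reference, a `<match>` section that would produce the log above looks roughly like this (the host and port are taken from the log; the path and buffer settings are assumptions for illustration):

```
<match hdfs.*.*>
  @type webhdfs
  host hdp85
  port 50070
  # hypothetical output path; adjust to your layout
  path /log/td-agent/access.%Y%m%d.log
</match>
```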

    1 Answer

    I don't know how everyone else does it, but here is what worked for me. You need to:

    1. Log in as the td-agent user. This is a restricted system user and has no login shell.
    2. Set the KRB5_CONFIG environment variable.
    3. Run kinit with its keytab.

      • sudo -u td-agent bash
      • export KRB5_CONFIG=/etc/krb5.conf
      • kinit -kt /var/lib/td-agent/td-agent.keytab td-agent

    Only after that did Fluentd start writing to HDFS.
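    Note that a ticket obtained this way will eventually expire again, which is exactly what the "Ticket expired" message in the log reports. One way to keep it fresh, as a sketch assuming the keytab path from the steps above, is a cron entry for the td-agent user that re-runs kinit before the ticket lifetime runs out:

    ```
    # renew the Kerberos ticket every 8 hours from the keytab
    # (keytab path and interval are assumptions; match them to your setup)
    0 */8 * * * KRB5_CONFIG=/etc/krb5.conf kinit -kt /var/lib/td-agent/td-agent.keytab td-agent
    ```

    Newer versions of fluent-plugin-webhdfs also appear to have a `kerberos true` option that lets the plugin handle GSSAPI authentication itself; check the plugin documentation for your installed version.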