1. Switch the yum repos to the Aliyun mirror

1.1 Back up the current yum repo configuration

sudo cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

1.2 Download the new CentOS-Base.repo into /etc/yum.repos.d/
sudo wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

1.3 Clean the yum cache and rebuild it

sudo yum clean all

sudo yum makecache

1.4 Update the system

sudo yum update

2. Install Docker

2.1 Install yum-utils

sudo yum install -y yum-utils

2.2 Add the Docker repository

sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2.3 Install Docker Engine

sudo yum install docker-ce docker-ce-cli containerd.io

2.4 Start the Docker service

sudo systemctl start docker


3. Install PaddlePaddle

3.1 Pull the image with PaddlePaddle preinstalled

docker pull registry.baidubce.com/paddlepaddle/paddle:2.5.2

3.2 Create a Docker container from the image

docker run -p 9292:9292 --name paddle -dit registry.baidubce.com/paddlepaddle/paddle:2.5.2

3.3 Start the Docker service and enter the container

# On the host: make sure the Docker daemon is running
sudo systemctl start docker

# Start the container and open a shell inside it
docker start paddle

docker exec -it paddle /bin/bash

 

3.4 Run the following inside the container

3.4.1 Install PaddleNLP and PaddleOCR

# Upgrade pip

pip install -U pip -i https://pypi.tuna.tsinghua.edu.cn/simple

# The container already includes paddlepaddle 2.6.1

pip list

# Install PaddleNLP

pip install paddlenlp -i https://pypi.tuna.tsinghua.edu.cn/simple

# Install PaddleOCR

pip install paddleocr -i https://pypi.tuna.tsinghua.edu.cn/simple
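Once PaddleOCR is installed, its output can be post-processed inside the container. The helper below is a hedged sketch, not part of the original notes: it assumes PaddleOCR's documented result shape (per image, a list of [box, (text, score)] entries; verify against your installed version), and `doc.jpg` is a placeholder path.

```python
# Flatten PaddleOCR results into (text, score) pairs.
# Assumes the result shape: per image, a list of [box, (text, score)]
# entries -- check this against your PaddleOCR version.

def flatten_ocr_result(pages):
    """Collect the recognized text lines and their confidence scores."""
    lines = []
    for page in pages:
        for box, (text, score) in page:
            lines.append((text, score))
    return lines

# Usage inside the container (doc.jpg is a placeholder image):
# from paddleocr import PaddleOCR
# ocr = PaddleOCR(use_angle_cls=True, lang="ch")
# print(flatten_ocr_result(ocr.ocr("doc.jpg", cls=True)))
```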

 

3.4.2 Prepare the PaddleServing runtime environment (package versions: https://github.com/PaddlePaddle/Serving/blob/develop/doc/Latest_Packages_CN.md)

 

Install the client package

pip install paddle-serving-client -i https://pypi.tuna.tsinghua.edu.cn/simple

Install the server package

pip install paddle-serving-server -i https://pypi.tuna.tsinghua.edu.cn/simple

Install the app (tools) package

pip install paddle-serving-app -i https://pypi.tuna.tsinghua.edu.cn/simple

 

3.4.3 Model conversion

Once the detection model conversion finishes, two new folders, ppocr_det_v3_serving and ppocr_det_v3_client, appear in the current directory.

# Download and extract the OCR text detection model

wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar -O ch_PP-OCRv3_det_infer.tar && tar -xf ch_PP-OCRv3_det_infer.tar

# Download and extract the OCR text recognition model

wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar -O ch_PP-OCRv3_rec_infer.tar &&  tar -xf ch_PP-OCRv3_rec_infer.tar

 

# Use the installed paddle_serving_client to convert the downloaded inference models into a format the server can deploy

# Convert the detection model

python -m paddle_serving_client.convert --dirname ./ch_PP-OCRv3_det_infer/ \
                                        --model_filename inference.pdmodel \
                                        --params_filename inference.pdiparams \
                                        --serving_server ./ppocr_det_v3_serving/ \
                                        --serving_client ./ppocr_det_v3_client/

# Convert the recognition model

python -m paddle_serving_client.convert --dirname ./ch_PP-OCRv3_rec_infer/ \
                                        --model_filename inference.pdmodel \
                                        --params_filename inference.pdiparams \
                                        --serving_server ./ppocr_rec_v3_serving/ \
                                        --serving_client ./ppocr_rec_v3_client/

 

 

3.4.4 Deploy the Paddle Serving pipeline

Note: the two model_config fields in PaddleOCR/deploy/pdserving/config.yml must point to the folders produced by the model conversion.
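For reference, the relevant part of config.yml looks roughly like this. A hedged sketch: the op/key names follow the config.yml shipped with PaddleOCR's pdserving example and may differ in your version; the paths are the folders produced by the conversion step.

```yaml
# Fragment of PaddleOCR/deploy/pdserving/config.yml (structure assumed
# from the shipped example; adjust to match your file)
op:
    det:
        local_service_conf:
            model_config: ./ppocr_det_v3_serving
    rec:
        local_service_conf:
            model_config: ./ppocr_rec_v3_serving
```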

The pdserving directory contains the code for starting the pipeline service and sending prediction requests, including:

 

nohup python web_service.py --config=config.yml &>log.txt &

tail -f ./log.txt
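After the service is up, a prediction request can be sent over HTTP. The sketch below is a hedged example, not part of the original notes: the {"key": [...], "value": [...]} body follows the Paddle Serving pipeline protocol, but the URL, port, and service name come from config.yml and are assumptions here.

```python
import base64

def build_request(image_path):
    """Build a Paddle Serving pipeline request body: the image is
    base64-encoded and wrapped in the {"key": [...], "value": [...]} shape."""
    with open(image_path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf8")
    return {"key": ["image"], "value": [data]}

# Sending the request (requires the running service; the port and path are
# assumptions taken from the pdserving example's config.yml):
# import requests
# resp = requests.post("http://127.0.0.1:9998/ocr/prediction",
#                      json=build_request("test.jpg"))
# print(resp.json())
```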

Note:

Copy files from the host into the container:

docker cp PaddleOCR-main.zip paddle:/home

docker cp PaddleNLP-2.8.1.zip paddle:/home

 

Install zip and unzip on Ubuntu

apt install zip unzip

 

Install tree

apt install tree

tree -h *_client *_serving