Added fastapi server and Docker files. #41

Open
wants to merge 2 commits into
base: main
Choose a base branch
from
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension


Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
Dockerfile (new file, +16)
```dockerfile
FROM python:slim-bullseye
```

Review comment — suggested change:

```dockerfile
FROM python:3.10-slim-bullseye
```

torchvision cannot be installed if the Python version is > 3.10.

Author reply: I'm using this Docker image myself, and it works quite well. I will try your change when I have the time, but could you tell me what the difference is?


```dockerfile
WORKDIR /app
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y git build-essential python3-setuptools
RUN git clone https://github.com/Stability-AI/stable-fast-3d.git
WORKDIR /app/stable-fast-3d
COPY server.py .
COPY env.server .env
COPY requirements.txt .
RUN mkdir model
RUN rm __init__.py
RUN python -m pip install torch torchvision torchaudio setuptools==69.5.1 wheel
RUN python -m pip install -r requirements.txt
CMD ["fastapi", "run", "/app/stable-fast-3d/server.py", "--port", "8000"]
```
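The reviewer's constraint above (torchvision wheels unavailable beyond Python 3.10) could be surfaced at startup rather than discovered mid-install. A minimal, purely illustrative guard — `check_python_version` is a hypothetical helper, not part of the PR:

```python
import sys


def check_python_version(max_minor: int = 10) -> bool:
    """Return True when the interpreter is Python 3.<max_minor> or older.

    Hypothetical guard: torchvision wheels are reported unavailable above
    3.10, so a server could fail fast with a clear message instead of a
    confusing pip resolution error.
    """
    major, minor = sys.version_info[:2]
    return (major, minor) <= (3, max_minor)


if not check_python_version():
    print("Warning: Python > 3.10 detected; torchvision may not install.")
```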
docker-compose.yml (new file, +12)
```yaml
version: '3.4'

services:
  stablefast3d:
    image: stablefast3d
    ports:
      - 8100:8000
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - ./model:/app/stable-fast-3d/model
```
env.server (new file, +7)
```
DEVICE=cpu
PRETRAINED_MODEL=model
FOREGROUND_RATIO=0.85
TEXTURE_RESOLUTION=1024
REMESSH_OPTION=none
TARGET_VERTEX_COUNT=-1
BATCH_SIZE=1
```
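The server reads these variables with `os.getenv` and casts them at each call site. One way to centralize that — a sketch only; `read_settings` is a hypothetical helper with defaults mirroring env.server, not code from the PR:

```python
import os


def read_settings() -> dict:
    """Read the server's environment variables with typed fallbacks.

    The names match env.server; the defaults here are assumptions taken
    from the values that file ships with.
    """
    return {
        "device": os.getenv("DEVICE", "cpu"),
        "pretrained_model": os.getenv("PRETRAINED_MODEL", "model"),
        "foreground_ratio": float(os.getenv("FOREGROUND_RATIO", "0.85")),
        "texture_resolution": int(os.getenv("TEXTURE_RESOLUTION", "1024")),
        "remesh_option": os.getenv("REMESSH_OPTION", "none"),
        "target_vertex_count": int(os.getenv("TARGET_VERTEX_COUNT", "-1")),
        "batch_size": int(os.getenv("BATCH_SIZE", "1")),
    }
```

Casting in one place avoids a `TypeError` from `float(None)` when a variable is missing from the environment.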
requirements.txt (+2, −1)

```diff
@@ -9,6 +9,7 @@ huggingface-hub==0.23.4
 rembg[gpu]==2.0.57; sys_platform != 'darwin'
 rembg==2.0.57; sys_platform == 'darwin'
 git+https://github.com/vork/[email protected]
-gpytoolbox==0.2.0
+gpytoolbox==0.3.2
+fastapi[standard]>=0.112.2
 ./texture_baker/
 ./uv_unwrapper/
```
server.py (new file, +72)
```python
import os
from contextlib import nullcontext

import rembg
import torch
from dotenv import load_dotenv
from fastapi import FastAPI, UploadFile
from fastapi.responses import Response
from PIL import Image
from sf3d.system import SF3D
from sf3d.utils import remove_background, resize_foreground

load_dotenv()

app = FastAPI()

model = SF3D.from_pretrained(
    os.getenv("PRETRAINED_MODEL"),
    config_name="config.yaml",
    weight_name="model.safetensors",
)
model.to(os.getenv("DEVICE"))
model.eval()

rembg_session = rembg.new_session()


@app.post(
    "/generate",
    responses={200: {"content": {"model/gltf-binary": {}}}},
    response_class=Response,
)
async def generate(file: UploadFile):
    # Load the uploaded image with Pillow
    image = Image.open(file.file)
    image = remove_bg(image)
    mesh = generate_model(image)

    # Return the mesh as a binary stream with a Content-Disposition
    # header suitable for download
    return Response(
        content=mesh,
        media_type="model/gltf-binary",
        headers={"Content-Disposition": "attachment; filename=mesh.glb"},
    )


def remove_bg(img: Image.Image) -> Image.Image:
    img = remove_background(img, rembg_session)
    img = resize_foreground(img, float(os.getenv("FOREGROUND_RATIO")))
    return img


def generate_model(image: Image.Image):
    device = os.getenv("DEVICE")
    if torch.cuda.is_available():
        torch.cuda.reset_peak_memory_stats()
    with torch.no_grad():
        # Autocast to float16 on CUDA; plain no-op context elsewhere
        with torch.autocast(
            device_type=device, dtype=torch.float16
        ) if "cuda" in device else nullcontext():
            mesh, glob_dict = model.run_image(
                image,
                bake_resolution=int(os.getenv("TEXTURE_RESOLUTION")),
                remesh=os.getenv("REMESSH_OPTION"),
                vertex_count=int(os.getenv("TARGET_VERTEX_COUNT")),
            )
    if torch.cuda.is_available():
        print("Peak Memory:", torch.cuda.max_memory_allocated() / 1024 / 1024, "MB")
    elif torch.backends.mps.is_available():
        print("Peak Memory:", torch.mps.driver_allocated_memory() / 1024 / 1024, "MB")

    return mesh.export(include_normals=True, file_type="glb")
```
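The PR doesn't show how a client would call `/generate`. A stdlib-only sketch of a multipart upload — `build_multipart` and `post_image` are hypothetical helper names, and port 8100 is assumed from the compose file's `8100:8000` mapping:

```python
import io
import urllib.request
import uuid


def build_multipart(field: str, filename: str, payload: bytes, content_type: str):
    """Hand-build a multipart/form-data body for the upload field."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field}"; '
        f'filename="{filename}"\r\n'.encode()
    )
    body.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    body.write(payload)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), boundary


def post_image(url: str, image_bytes: bytes) -> bytes:
    """POST an image to the endpoint and return the glb bytes."""
    data, boundary = build_multipart("file", "input.png", image_bytes, "image/png")
    req = urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# Example (assuming the compose stack is running):
#   glb = post_image("http://localhost:8100/generate",
#                    open("input.png", "rb").read())
#   open("mesh.glb", "wb").write(glb)
```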