
Upload an entire Bitbucket repo to S3 using Bitbucket Pipelines

I'm using Bitbucket Pipelines. I want it to push the entire contents of my repo (which is very small) to S3. I don't want to zip it, push it to S3 and then unzip it there. I just want it to take the existing file/folder structure in my Bitbucket repo and push that to S3.

How should the YAML file and the .py file be set up to accomplish this?

Here is the current YAML file:

image: python:3.5.1

pipelines:
  branches:
    master:
      - step:
          script:
            # - apt-get update # required to install zip
            # - apt-get install -y zip # required if you want to zip repository objects
            - pip install boto3==1.3.0 # required for s3_upload.py
            # the first argument is the name of the existing S3 bucket to upload the artefact to
            # the second argument is the artefact to be uploaded
            # the third argument is the bucket key
            # html files
            - python s3_upload.py my-bucket-name html/index_template.html html/index_template.html # run the deployment script
            # Example command line parameters. Replace with your values
            #- python s3_upload.py bb-s3-upload SampleApp_Linux.zip SampleApp_Linux # run the deployment script

And here is my current Python:

from __future__ import print_function
import os
import sys
import argparse
import boto3
from botocore.exceptions import ClientError

def upload_to_s3(bucket, artefact, bucket_key):
    """
    Uploads an artefact to Amazon S3
    """
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False
    try:
        client.put_object(
            Body=open(artefact, 'rb'),
            Bucket=bucket,
            Key=bucket_key
        )
    except ClientError as err:
        print("Failed to upload artefact to S3.\n" + str(err))
        return False
    except IOError as err:
        print("Failed to access artefact in this directory.\n" + str(err))
        return False
    return True


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket")
    parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
    parser.add_argument("bucket_key", help="Name of the S3 Bucket key")
    args = parser.parse_args()

    if not upload_to_s3(args.bucket, args.artefact, args.bucket_key):
        sys.exit(1)

if __name__ == "__main__":
    main()

This requires me to list every single file in the repo as another command in the YAML file. I just want it to grab everything and upload it to S3.


What specifically is the question? – jbird


@jbird See the edit – scottndecker


@jbird He's asking the basic question of how to recursively send multiple files to S3 using the sample AWS Labs provides for Bitbucket Pipelines. @scottndecker I have the same question. In Bamboo I run a shell script to handle it:

#!/bin/bash
export AWS_ACCESS_KEY_ID=${bamboo.awsAccessKeyId}
export AWS_SECRET_ACCESS_KEY=${bamboo.awsSecretAccessKeyPassword}
export AWS_DEFAULT_REGION=us-east-1
aws s3 sync dist/library s3://yourbuckethere/ --delete
aws s3 sync dist/library s3://yourbuckethere/

No luck in Pipelines yet. –

Answers


Figured it out myself. Here is the Python file, "s3_upload.py":

from __future__ import print_function
import os
import sys
import argparse
import boto3
#import zipfile
from botocore.exceptions import ClientError

def upload_to_s3(bucket, artefact, is_folder, bucket_key):
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False
    if is_folder == 'true':
        for root, dirs, files in os.walk(artefact, topdown=False):
            print('Walking it')
            for file in files:
                #add a check like this if you just want certain file types uploaded
                #if file.endswith('.js'):
                try:
                    print(file)
                    client.upload_file(os.path.join(root, file), bucket, os.path.join(root, file))
                except ClientError as err:
                    print("Failed to upload artefact to S3.\n" + str(err))
                    return False
                except IOError as err:
                    print("Failed to access artefact in this directory.\n" + str(err))
                    return False
                #else:
                #    print('Skipping file:' + file)
    else:
        print('Uploading file ' + artefact)
        client.upload_file(artefact, bucket, bucket_key)
    return True


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket")
    parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
    parser.add_argument("is_folder", help="True if it's the name of a folder")
    parser.add_argument("bucket_key", help="Name of file in bucket")
    args = parser.parse_args()

    if not upload_to_s3(args.bucket, args.artefact, args.is_folder, args.bucket_key):
        sys.exit(1)

if __name__ == "__main__":
    main()

And here is the bitbucket-pipelines.yml file:

---
image: python:3.5.1

pipelines:
  branches:
    dev:
      - step:
          script:
            - pip install boto3==1.4.1 # required for s3_upload.py
            - pip install requests
            # the first argument is the name of the existing S3 bucket to upload the artefact to
            # the second argument is the artefact to be uploaded
            # the third argument is if the artefact is a folder
            # the fourth argument is the bucket_key to use
            - python s3_emptyBucket.py dev-slz-processor-repo
            - python s3_upload.py dev-slz-processor-repo lambda true lambda
            - python s3_upload.py dev-slz-processor-repo node_modules true node_modules
            - python s3_upload.py dev-slz-processor-repo config.dev.json false config.json
    stage:
      - step:
          script:
            - pip install boto3==1.3.0 # required for s3_upload.py
            - python s3_emptyBucket.py staging-slz-processor-repo
            - python s3_upload.py staging-slz-processor-repo lambda true lambda
            - python s3_upload.py staging-slz-processor-repo node_modules true node_modules
            - python s3_upload.py staging-slz-processor-repo config.staging.json false config.json
    master:
      - step:
          script:
            - pip install boto3==1.3.0 # required for s3_upload.py
            - python s3_emptyBucket.py prod-slz-processor-repo
            - python s3_upload.py prod-slz-processor-repo lambda true lambda
            - python s3_upload.py prod-slz-processor-repo node_modules true node_modules
            - python s3_upload.py prod-slz-processor-repo config.prod.json false config.json

For the dev branch, as an example, it grabs everything in the "lambda" folder, walks that folder's entire structure, finds every item, and uploads it to the dev-slz-processor-repo bucket. Before uploading the new objects, s3_emptyBucket.py deletes all existing objects from the bucket:

from __future__ import print_function
import os
import sys
import argparse
import boto3
#import zipfile
from botocore.exceptions import ClientError

def empty_bucket(bucket):
    try:
        resource = boto3.resource('s3')
    except ClientError as err:
        print("Failed to create boto3 resource.\n" + str(err))
        return False
    print("Removing all objects from bucket: " + bucket)
    resource.Bucket(bucket).objects.delete()
    return True


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket to empty")
    args = parser.parse_args()

    if not empty_bucket(args.bucket):
        sys.exit(1)

if __name__ == "__main__":
    main()
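One caveat worth adding, based on boto3's documented behaviour rather than anything in the answer above: objects.delete() only removes current objects. If versioning is enabled on the bucket, old versions and delete markers stay behind. A minimal sketch of a variant that clears those too (the bucket name is a placeholder):

import boto3

def empty_versioned_bucket(bucket_name):
    """Remove every object version and delete marker from a versioned bucket."""
    resource = boto3.resource('s3')
    # object_versions iterates all versions and delete markers;
    # delete() issues batched DeleteObjects requests for them.
    resource.Bucket(bucket_name).object_versions.delete()

# empty_versioned_bucket('my-versioned-bucket')  # placeholder name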

Nice. Thanks for posting. Going to try it out myself and come back to upvote you. –


Works like a charm. Thanks, mate. One question though: I see you went from boto 1.3.0 to 1.4.1. Was that a requirement or just personal preference? Also, AWS's standard script already seemed to replace existing code with the latest, but that was only tested with their sample Linux app. Did you see a difference when trying to upload and replace multiple keys/files? –


@isaacweathers Good questions. 1) I upgraded boto to 1.4.1 because of this line in the s3_emptyBucket function: "resource.Bucket(bucket).objects.delete()", which I don't think is available in 1.3.0. 2) I empty the bucket first because if a file is deleted from the source folder, I don't want it hanging around in the S3 bucket. – scottndecker
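An alternative to wiping the bucket on every build is to delete only the keys whose files no longer exist locally. A rough sketch, assuming the keys are the local paths exactly as the accepted answer uploads them (the function name is made up for illustration):

import os
import boto3

def delete_stale_keys(bucket_name, folder):
    """Hypothetical helper: drop bucket objects whose source files were deleted."""
    local_keys = set()
    for root, dirs, files in os.walk(folder):
        for name in files:
            local_keys.add(os.path.join(root, name))
    client = boto3.client('s3')
    # Page through the bucket listing and remove anything not present locally.
    paginator = client.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket_name, Prefix=folder):
        for obj in page.get('Contents', []):
            if obj['Key'] not in local_keys:
                client.delete_object(Bucket=bucket_name, Key=obj['Key'])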


You can switch to using the Docker image https://hub.docker.com/r/abesiyo/s3/

It runs quite well.

bitbucket-pipelines.yml:

image: abesiyo/s3

pipelines:
  default:
    - step:
        script:
          - s3 --region "us-east-1" rm s3://<bucket name>
          - s3 --region "us-east-1" sync . s3://<bucket name>

Also set up the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the Bitbucket Pipelines settings.
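If you stay with the boto3 scripts from the question instead of this image, those same two repository variables are all you need; boto3 reads them from the environment, so no credentials have to appear in the code. A tiny sketch (bucket and key are placeholders):

import boto3

# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are picked up from the
# environment that Bitbucket Pipelines injects into the build container.
client = boto3.client('s3')
client.put_object(Bucket='my-bucket-name', Key='pipeline-check.txt', Body=b'ok')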


For deploying a static website to Amazon S3 I have this bitbucket-pipelines.yml configuration file:

image: attensee/s3_website

pipelines:
  default:
    - step:
        script:
          - s3_website push

I'm using the attensee/s3_website Docker image because it has the awesome s3_website tool already installed. The configuration file for s3_website, s3_website.yml (create this file in the root of the Bitbucket repository), looks like this:

s3_id: <%= ENV['S3_ID'] %> 
s3_secret: <%= ENV['S3_SECRET'] %> 
s3_bucket: bitbucket-pipelines 
site : . 

We have to define the environment variables S3_ID and S3_SECRET in the Bitbucket settings.

Thanks to https://www.savjee.be/2016/06/Deploying-website-to-ftp-or-amazon-s3-with-BitBucket-Pipelines/ for the solution.


The following works for me. This is my yaml file, which uses the Docker image with the official AWS command line tools: cgswong/aws. Very handy, and more capable than the one Bitbucket recommends (abesiyo/s3).

image: cgswong/aws

pipelines:
  branches:
    master:
      - step:
          script:
            - aws s3 --region "us-east-1" sync public/ s3://static-site-example.activo.com --cache-control "public, max-age=14400" --delete

A few notes:

  1. Make sure you enter the name of your own S3 bucket, not mine.
  2. Set the correct source folder for your code; it can be the root '/' or any deeper folder, and everything below it will be synced.
  3. The '--delete' option removes objects that have been deleted from the source folder; decide whether you need it.
  4. --cache-control lets you set the Cache-Control header metadata on every file in the S3 bucket. Set it if you need it (a boto3 equivalent is sketched after this list).
  5. Note that I attach this command to every commit on the master branch; adjust if needed.
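For anyone using the boto3 script from the accepted answer instead of the AWS CLI, roughly the same cache-control behaviour can be had through upload_file's ExtraArgs; a minimal sketch with placeholder path, bucket and key:

import boto3

client = boto3.client('s3')
# CacheControl is one of the extra arguments upload_file passes through to S3.
client.upload_file(
    'public/index.html',   # placeholder local file
    'my-bucket-name',      # placeholder bucket
    'index.html',          # placeholder destination key
    ExtraArgs={'CacheControl': 'public, max-age=14400'}
)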

Here is the full article: Continuous Deployment with Bitbucket Pipelines, S3, and CloudFront