<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Learn by doing</title>
    <link>https://acdongpgm.tistory.com/</link>
    <description>E-mail : alswhddh@naver.com / 자연어처리와 MLops 를 연구하고 있는 스타트업 개발자입니다.</description>
    <language>ko</language>
    <pubDate>Tue, 14 Apr 2026 12:32:39 +0900</pubDate>
    <generator>TISTORY</generator>
    <ttl>100</ttl>
    <managingEditor>Acdong</managingEditor>
    <image>
      <title>Learn by doing</title>
      <url>https://tistory1.daumcdn.net/tistory/3059391/attach/d69132a9ff594c5487cfa0c684f2bc8f</url>
      <link>https://acdongpgm.tistory.com</link>
    </image>
    <item>
      <title>[NLP]. A converter that turns informal Korean (반말) into formal Korean (존댓말) - Korean Formal Converter Using Deep Learning</title>
      <link>https://acdongpgm.tistory.com/355</link>
      <description>&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;600&quot; data-origin-height=&quot;450&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lgO9p/btslnc6cRsf/bXsrKakILKzeZBtQBJMCY0/img.gif&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lgO9p/btslnc6cRsf/bXsrKakILKzeZBtQBJMCY0/img.gif&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lgO9p/btslnc6cRsf/bXsrKakILKzeZBtQBJMCY0/img.gif&quot; srcset=&quot;https://blog.kakaocdn.net/dn/lgO9p/btslnc6cRsf/bXsrKakILKzeZBtQBJMCY0/img.gif&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;526&quot; height=&quot;395&quot; data-origin-width=&quot;600&quot; data-origin-height=&quot;450&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Distinct formal (존댓말) and informal (반말) speech levels are a characteristic feature of Korean. This model is a converter that turns informal speech into formal speech.&lt;br /&gt;&lt;br /&gt;*The formal-speech dataset we collected contained two styles, &quot;해요체&quot; and &quot;합쇼체&quot;, but this model standardizes its output on &quot;해요체&quot;.&lt;/p&gt;
&lt;table data-ke-align=&quot;alignLeft&quot;&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;합쇼체&lt;/th&gt;
&lt;th&gt;*해요체&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;안녕하십니까.&lt;/td&gt;
&lt;td&gt;안녕하세요.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;좋은 아침입니다.&lt;/td&gt;
&lt;td&gt;좋은 아침이에요.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;바쁘시지 않았으면 좋겠습니다.&lt;/td&gt;
&lt;td&gt;바쁘시지 않았으면 좋겠어요.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Background&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;I previously trained a classifier that distinguishes formal from informal speech (&lt;a href=&quot;https://github.com/jongmin-oh/korean-formal-classifier&quot;&gt;https://github.com/jongmin-oh/korean-formal-classifier&lt;/a&gt;).&lt;br /&gt;&lt;br /&gt;I intended to split utterances by speech style with that classifier, but formal speech was relatively underrepresented, so I built this converter to grow the formal portion of the data by rewriting informal sentences as formal ones.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Korean Formality Converter&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;The converter is built on the T5 architecture and performs a text2text-generation task, so it can be used to rewrite informal sentences as formal ones.&lt;/li&gt;
&lt;li&gt;If you just want to use it, download the Hugging Face model ('j5ng/et5-formal-convertor') and follow the example code below.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Based on PLM (ET5)&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;ETRI(&lt;a href=&quot;https://aiopen.etri.re.kr/et5Model&quot;&gt;https://aiopen.etri.re.kr/et5Model&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Based on Datasets&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;AI Hub (&lt;a href=&quot;https://www.aihub.or.kr/&quot;&gt;https://www.aihub.or.kr/&lt;/a&gt;): Korean speech-style conversion corpus
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;KETI everyday-office dialogue, 1,254 sentences&lt;/li&gt;
&lt;li&gt;Manually tagged parallel data&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;Smilegate speech-style dataset (korean SmileStyle Dataset)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Preprocessing&lt;/h3&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Split informal/formal data (keeping only &quot;해요체&quot;)
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;From the Smilegate data, use only the ['formal','informal'] columns&lt;/li&gt;
&lt;li&gt;From the manually tagged parallel data, use only the [&quot;*.ban&quot;, &quot;*.yo&quot;] txt files&lt;/li&gt;
&lt;li&gt;From the KETI everyday-office data, use only the [&quot;반말&quot;,&quot;해요체&quot;] columns&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Merge the three datasets&lt;/li&gt;
&lt;li&gt;Remove periods (.) and commas (,)&lt;/li&gt;
&lt;li&gt;Deduplicate the informal column: 1,632 duplicate rows removed&lt;/li&gt;
&lt;/ol&gt;
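&lt;p data-ke-size=&quot;size16&quot;&gt;The preprocessing steps above can be sketched roughly as follows in pandas. The column names and the toy rows here are illustrative assumptions, not the actual dataset files.&lt;/p&gt;

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # Step 3: strip periods and commas from both columns
    out = df.copy()
    for col in ("informal", "formal"):
        out[col] = out[col].str.replace(r"[.,]", "", regex=True).str.strip()
    # Step 4: deduplicate on the informal column
    return out.drop_duplicates(subset="informal").reset_index(drop=True)

# Step 2: merge the (toy) datasets, then clean
merged = pd.concat(
    [
        pd.DataFrame({"informal": ["응 고마워."], "formal": ["네 감사해요."]}),
        pd.DataFrame({"informal": ["응 고마워"], "formal": ["네 감사해요"]}),
    ],
    ignore_index=True,
)
print(preprocess(merged))
```

&lt;p data-ke-size=&quot;size16&quot;&gt;Note that deduplication runs after punctuation stripping, so &quot;응 고마워.&quot; and &quot;응 고마워&quot; collapse into one pair.&lt;/p&gt;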
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Final training data examples&lt;/h3&gt;
&lt;table data-ke-align=&quot;alignLeft&quot;&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;informal&lt;/th&gt;
&lt;th&gt;formal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;응 고마워&lt;/td&gt;
&lt;td&gt;네 감사해요&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;나도 그 책 읽었어 굉장히 웃긴 책이였어&lt;/td&gt;
&lt;td&gt;저도 그 책 읽었습니다 굉장히 웃긴 책이였어요&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;미세먼지가 많은 날이야&lt;/td&gt;
&lt;td&gt;미세먼지가 많은 날이네요&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;괜찮겠어?&lt;/td&gt;
&lt;td&gt;괜찮으실까요?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;아니야 회의가 잠시 뒤에 있어 준비해줘&lt;/td&gt;
&lt;td&gt;아니에요 회의가 잠시 뒤에 있어요 준비해주세요&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Total: 14,992 pairs&lt;/h4&gt;
&lt;hr data-ke-style=&quot;style1&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;How to use&lt;/h2&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load the T5 model and tokenizer
model = T5ForConditionalGeneration.from_pretrained(&quot;j5ng/et5-formal-convertor&quot;)
tokenizer = T5Tokenizer.from_pretrained(&quot;j5ng/et5-formal-convertor&quot;)

device = &quot;cuda:0&quot; if torch.cuda.is_available() else &quot;cpu&quot;
# device = &quot;mps:0&quot; if torch.backends.mps.is_available() else &quot;cpu&quot; # for Apple silicon

model = model.to(device)

# Example input sentence
input_text = &quot;나 진짜 화났어 지금&quot;

# Encode the input sentence
input_encoding = tokenizer(&quot;존댓말로 바꿔주세요: &quot; + input_text, return_tensors=&quot;pt&quot;)

input_ids = input_encoding.input_ids.to(device)
attention_mask = input_encoding.attention_mask.to(device)

# Generate the model output
output_encoding = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    max_length=128,
    num_beams=5,
    early_stopping=True,
)

# Decode the output sentence
output_text = tokenizer.decode(output_encoding[0], skip_special_tokens=True)

# Print the result
print(output_text) # 저 진짜 화났습니다 지금.&lt;/code&gt;&lt;/pre&gt;
&lt;hr data-ke-style=&quot;style1&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;With the Transformers Pipeline&lt;/h2&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer, pipeline

model = T5ForConditionalGeneration.from_pretrained('j5ng/et5-formal-convertor')
tokenizer = T5Tokenizer.from_pretrained('j5ng/et5-formal-convertor')

formality_converter = pipeline(
    &quot;text2text-generation&quot;,
    model=model,
    tokenizer=tokenizer,
    device=0 if torch.cuda.is_available() else -1,
    framework=&quot;pt&quot;,
)

input_text = &quot;널 가질 수 있을거라 생각했어&quot;
output_text = formality_converter(&quot;존댓말로 바꿔주세요: &quot; + input_text,
            max_length=128,
            num_beams=5,
            early_stopping=True)[0]['generated_text']

print(output_text) # 당신을 가질 수 있을거라 생각했습니다.&lt;/code&gt;&lt;/pre&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Thanks to&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Training of the formality converter was supported with GPU resources from 인공지능산업융합사업단 (AICA).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;Github : https://github.com/jongmin-oh/korean-formal-convertor&lt;/blockquote&gt;</description>
      <category>Machine learning/NLP</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/355</guid>
      <comments>https://acdongpgm.tistory.com/355#entry355comment</comments>
      <pubDate>Sun, 25 Jun 2023 18:39:00 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. A typo (spelling) corrector model for colloquial Korean messenger chat : Korean Typos(Spelling) Corrector Using Deep Learning</title>
      <link>https://acdongpgm.tistory.com/354</link>
      <description>&lt;h2 data-ke-size=&quot;size26&quot;&gt;한국어 맞춤법 교정기&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;ETRI-et5 모델을 기반으로 fine-tuning한 한국어 구어체 전용 맞춤법 교정기 입니다.&lt;/li&gt;
&lt;li&gt;바로 사용하실 분들은 밑에 예제 코드 참고해서 모델('j5ng/et5-typos-corrector') 다운받아 사용하실 수 있습니다.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Based on PLM (ET5)&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;ETRI(&lt;a href=&quot;https://aiopen.etri.re.kr/et5Model&quot;&gt;https://aiopen.etri.re.kr/et5Model&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Based on Datasets&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Modu Corpus (모두의 말뭉치, &lt;a href=&quot;https://corpus.korean.go.kr/request/reausetMain.do?lang=ko&quot;&gt;https://corpus.korean.go.kr/request/reausetMain.do?lang=ko&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Spelling-correction data&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Examples&lt;/h3&gt;
&lt;table data-ke-align=&quot;alignLeft&quot;&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;original&lt;/th&gt;
&lt;th&gt;corrected&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;이런게 눔 ㄱ ㅣ찮아서 ㅠㅠ&lt;/td&gt;
&lt;td&gt;이런 게 넘 귀찮아서 ㅠㅠ&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;어쩌다 가게되써&lt;/td&gt;
&lt;td&gt;어쩌다 가게 됐어?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;이따 얘기하쟈&lt;/td&gt;
&lt;td&gt;이따 얘기하자&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ㅋㅋㅋㅋㅋㅋ언넝 맞이해&lt;/td&gt;
&lt;td&gt;ㅋㅋㅋㅋㅋㅋ 얼른 맞이해&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;그냥 일을안가르쳐주고&lt;/td&gt;
&lt;td&gt;그냥 일을 안 가르쳐 주고&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Data Preprocessing&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Remove special characters: commas (,) and periods (.)&lt;/li&gt;
&lt;li&gt;Remove null values (&quot;&quot;)&lt;/li&gt;
&lt;li&gt;Remove overly short sentences (length 2 or less)&lt;/li&gt;
&lt;li&gt;Remove words containing a name tag such as &amp;amp;name&amp;amp; or name1 (drop only the word, keep the sentence)&lt;/li&gt;
&lt;li&gt;Total: 318,882 pairs&lt;/li&gt;
&lt;/ul&gt;
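&lt;p data-ke-size=&quot;size16&quot;&gt;A rough sketch of these filters on a single sentence. This is an illustrative assumption of how the steps combine; the pattern below only covers the name1-style tags, and the ampersand-delimited form in the original data would need one more pattern.&lt;/p&gt;

```python
import re

def clean_sentence(text):
    # Drop words carrying a name tag such as name1, keep the rest of the sentence
    words = [w for w in text.split() if not re.search(r"name\d*", w)]
    text = " ".join(words)
    # Strip commas and periods
    text = re.sub(r"[.,]", "", text).strip()
    # Discard null or overly short sentences (length 2 or less)
    return text if len(text) > 2 else None

print(clean_sentence("그냥 name1 일을 안 가르쳐 주고."))
print(clean_sentence("응."))
```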
&lt;hr data-ke-style=&quot;style1&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;How to use&lt;/h2&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load the T5 model and tokenizer
model = T5ForConditionalGeneration.from_pretrained(&quot;j5ng/et5-typos-corrector&quot;)
tokenizer = T5Tokenizer.from_pretrained(&quot;j5ng/et5-typos-corrector&quot;)

device = &quot;cuda:0&quot; if torch.cuda.is_available() else &quot;cpu&quot;
# device = &quot;mps:0&quot; if torch.backends.mps.is_available() else &quot;cpu&quot; # for Apple silicon

model = model.to(device)

# Example input sentence
input_text = &quot;아늬 진짜 무ㅓ하냐고&quot;

# Encode the input sentence
input_encoding = tokenizer(&quot;맞춤법을 고쳐주세요: &quot; + input_text, return_tensors=&quot;pt&quot;)

input_ids = input_encoding.input_ids.to(device)
attention_mask = input_encoding.attention_mask.to(device)

# Generate the model output
output_encoding = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    max_length=128,
    num_beams=5,
    early_stopping=True,
)

# Decode the output sentence
output_text = tokenizer.decode(output_encoding[0], skip_special_tokens=True)

# Print the result
print(output_text) # 아니 진짜 뭐 하냐고.&lt;/code&gt;&lt;/pre&gt;
&lt;hr data-ke-style=&quot;style1&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;With the Transformers Pipeline&lt;/h2&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer, pipeline

model = T5ForConditionalGeneration.from_pretrained('j5ng/et5-typos-corrector')
tokenizer = T5Tokenizer.from_pretrained('j5ng/et5-typos-corrector')

typos_corrector = pipeline(
    &quot;text2text-generation&quot;,
    model=model,
    tokenizer=tokenizer,
    device=0 if torch.cuda.is_available() else -1,
    framework=&quot;pt&quot;,
)

input_text = &quot;완죤 어이업ㅅ네진쨬ㅋㅋㅋ&quot;
output_text = typos_corrector(&quot;맞춤법을 고쳐주세요: &quot; + input_text,
            max_length=128,
            num_beams=5,
            early_stopping=True)[0]['generated_text']

print(output_text) # 완전 어이없네 진짜 ᄏᄏᄏᄏ.&lt;/code&gt;&lt;/pre&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Thanks to&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Training of the spelling corrector was supported with GPU resources from 인공지능산업융합사업단 (AICA).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;Github : &lt;a href=&quot;https://github.com/jongmin-oh/korean-typos-corrector&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/jongmin-oh/korean-typos-corrector&lt;/a&gt;&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Machine learning/NLP</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/354</guid>
      <comments>https://acdongpgm.tistory.com/354#entry354comment</comments>
      <pubDate>Sun, 25 Jun 2023 18:34:55 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Quantizing a Cross-Encoder with ONNX Runtime</title>
      <link>https://acdongpgm.tistory.com/353</link>
      <description>&lt;h3 data-ke-size=&quot;size23&quot;&gt;문제 발생&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;크로스 인코더(Cross-Encoder)는 바이 인코더(Bi-Encoder)와 다르게 질문(S1)과 답변(S2)을 함께 임베딩하고&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그 유사도를 학습하는 방식이다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;479&quot; data-origin-height=&quot;267&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dIDdCb/btsf58pkJP4/GDdCZG5bCfrW2WOQfTXYR0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dIDdCb/btsf58pkJP4/GDdCZG5bCfrW2WOQfTXYR0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dIDdCb/btsf58pkJP4/GDdCZG5bCfrW2WOQfTXYR0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdIDdCb%2Fbtsf58pkJP4%2FGDdCZG5bCfrW2WOQfTXYR0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;479&quot; height=&quot;267&quot; data-origin-width=&quot;479&quot; data-origin-height=&quot;267&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For speed, I had normally used a bi-encoder and compared precomputed embeddings,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;but switching to a cross-encoder raises the similarity score considerably (in my case, from 73 to 83).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;But the cross-encoder's weakness is speed...&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I picked the lightest model I could, converted it to ONNX (which I already use widely), and applied quantization to minimize inference time. &lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;But it turned out that a model trained with the Sentence-Transformers module only produces embeddings.&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running it with ONNX Runtime, the output came back with shape (1, 64, 256),&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;when what I wanted was a &lt;span style=&quot;color: #006dd7;&quot;&gt;&lt;b&gt;similarity value between 0 and 1&lt;/b&gt;&lt;/span&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;The cause&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The cause was in how the model is saved.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Even when you train with Sentence-Transformers' CrossEncoder, it saves the embedding model, and at predict time&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;it uses transformers' AutoModelForSequenceClassification.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With AutoModelForSequenceClassification, a final &lt;br /&gt;&lt;b&gt;&quot;Linear(in_features=256, out_features=1, bias=True)&quot;&lt;/b&gt; layer is appended,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;which reduces the (1, 64, 256) output to a single scalar value.&amp;nbsp;&lt;/p&gt;
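&lt;p data-ke-size=&quot;size16&quot;&gt;The effect of that head can be sketched with plain torch tensors. The shapes follow the numbers above; the random weights are stand-ins for the trained ones.&lt;/p&gt;

```python
import torch

torch.manual_seed(0)

# Encoder output as described above: (batch=1, seq_len=64, hidden=256)
hidden_states = torch.randn(1, 64, 256)

# The head appended by AutoModelForSequenceClassification: it takes the
# first ([CLS]) token's vector and projects 256 features down to one logit
head = torch.nn.Linear(in_features=256, out_features=1, bias=True)

cls_vector = hidden_states[:, 0, :]  # (1, 256)
logit = head(cls_vector)             # (1, 1): a single score
score = torch.sigmoid(logit)         # squashed into the 0..1 range

print(logit.shape, score.shape)
```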
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;The fix&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The problem is solved by adding that final Linear layer to the ONNX model as well, so its output also becomes a scalar.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Then the Sentence-Transformers CrossEncoder and the ONNX model produce identical outputs.&lt;/p&gt;
&lt;pre id=&quot;code_1684228977093&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from pathlib import Path

import transformers
import transformers.convert_graph_to_onnx as onnx_convert
from onnxruntime.quantization import quantize_dynamic, QuantType

# Wrap the (already loaded) model and tokenizer in a classification pipeline
pipeline = transformers.pipeline(
    &quot;text-classification&quot;, model=model, tokenizer=tokenizer)

# Convert to a CPU-only ONNX model
model = model.to('cpu')
onnx_convert.convert_pytorch(pipeline, opset=11, output=Path(&quot;cross_encoder.onnx&quot;),
	use_external_format=False)

# Quantize the ONNX model weights
quantize_dynamic(&quot;cross_encoder.onnx&quot;, &quot;cross_encoder_uint8.onnx&quot;,
                 weight_type=QuantType.QUInt8)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After a lot of googling and grilling ChatGPT, I found the solution:&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;define a transformers pipeline first, then convert the pipeline to an ONNX model.&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1684229293490&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import numpy as np
from transformers import AutoTokenizer

import onnxruntime as rt
sess = rt.InferenceSession(&quot;cross_encoder_uint8.onnx&quot;, providers=['CPUExecutionProvider'])

# MODEL_PATH: path of the original (pre-conversion) model
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
sentence = tokenizer([[&quot;안녕&quot;,&quot;오 안녕 반가워&quot;]], padding=True, truncation='longest_first', return_tensors=&quot;pt&quot;, max_length=256)

# Convert the tensors to numpy arrays
input_feed = {
    &quot;input_ids&quot;: np.array(sentence['input_ids']),
    &quot;attention_mask&quot;: np.array(sentence['attention_mask']),
    &quot;token_type_ids&quot;: np.array(sentence['token_type_ids'])
}

print(sess.run(None, input_feed)[0]) # array([[4.698374]], dtype=float32)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For the connected pair [&quot;안녕&quot;,&quot;오 안녕 반가워&quot;] we get the scalar value &lt;b&gt;&quot;4.698374&quot;&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Applying a sigmoid to it yields a value between 0 and 1.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1684229374963&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

sigmoid(sess.run(None, input_feed)[0]) # array([[0.99097216]], dtype=float32)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style2&quot;&gt;The final value is 0.99097: after &quot;안녕&quot;, the reply &quot;오 안녕 반가워&quot; can be considered 99% natural.&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;*[안녕, 자동차] scored 0.123. Both results match the original CrossEncoder's predict output before conversion.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Conclusion&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The cross-encoder's weakness is its slow speed, but with a lightweight model and quantization I cut latency&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #006dd7;&quot;&gt;from 7.1 ms (0.0071 s) to 803 µs (0.000803 s), roughly an 8-9x improvement,&lt;/span&gt;&lt;/b&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;while &lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;maintaining high accuracy&lt;/span&gt;&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Machine learning/NLP</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/353</guid>
      <comments>https://acdongpgm.tistory.com/353#entry353comment</comments>
      <pubDate>Tue, 16 May 2023 18:43:49 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Preprocessing Pandas data with Ray and a Scikit-learn Pipeline</title>
      <link>https://acdongpgm.tistory.com/352</link>
      <description>&lt;pre id=&quot;code_1680675885874&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import pandas as pd
import ray
import re
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer

# 예시 데이터프레임
df = pd.DataFrame({'question': ['Q1', 'Q2', 'Q3'],
                   'answer': ['&amp;lt;p&amp;gt;Answer 1&amp;lt;/p&amp;gt;', '&amp;lt;span&amp;gt;Answer 2&amp;lt;/span&amp;gt;', '&amp;lt;div&amp;gt;Answer 3&amp;lt;/div&amp;gt;']})

# HTML 태그 제거 함수
@ray.remote
def remove_html_tags(text):
    clean_text = re.sub('&amp;lt;.*?&amp;gt;', '', text) # 정규식을 이용하여 HTML 태그 제거
    return clean_text

# 데이터프레임의 'answer' 칼럼에 ray를 사용하여 HTML 태그 제거하는 함수
def remove_html_tags_parallel(texts):
    return ray.get([remove_html_tags.remote(text) for text in texts])

# Pipeline 정의
pipeline = Pipeline([
    ('html_tags_removal', FunctionTransformer(remove_html_tags_parallel))
])

# 데이터프레임의 'answer' 칼럼에 Pipeline 적용
result = pipeline.fit_transform(df['answer'])

print(result)&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Machine learning/NLP</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/352</guid>
      <comments>https://acdongpgm.tistory.com/352#entry352comment</comments>
      <pubDate>Wed, 5 Apr 2023 15:25:06 +0900</pubDate>
    </item>
    <item>
      <title>[AWS]. Basic setup for a new EC2 instance (notes)</title>
      <link>https://acdongpgm.tistory.com/351</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;기본적으로 OS는 아마존 리눅스를 사용했습니다(Centos랑 비슷)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1. Install htop&lt;br /&gt;&lt;span style=&quot;background-color: #fcfcfc; color: #666666;&quot;&gt;htop is used to monitor OS resources such as CPU and memory in real time.&lt;/span&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1680095816160&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo yum update -y &amp;amp;&amp;amp; sudo yum install -y htop&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2. Install Anaconda&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Install path: /home/ec2-user/&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1680095982032&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;wget https://repo.anaconda.com/archive/Anaconda3-2023.03-Linux-x86_64.sh&lt;/code&gt;&lt;/pre&gt;
&lt;pre id=&quot;code_1680096109349&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;bash Anaconda3-2023.03-Linux-x86_64.sh&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Set the environment variable&lt;/p&gt;
&lt;pre id=&quot;code_1680096356066&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;vi ~/.bashrc&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Add at the top of the file&lt;/p&gt;
&lt;pre id=&quot;code_1680096393117&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;export PATH=&quot;/home/ec2-user/anaconda3/bin:$PATH&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Apply the change&lt;/p&gt;
&lt;pre id=&quot;code_1680096413183&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;source ~/.bashrc&lt;/code&gt;&lt;/pre&gt;</description>
      <category>MLops/AWS</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/351</guid>
      <comments>https://acdongpgm.tistory.com/351#entry351comment</comments>
      <pubDate>Wed, 29 Mar 2023 22:27:41 +0900</pubDate>
    </item>
    <item>
      <title>[Deep Learning]. An order for applying NLP model compression techniques</title>
      <link>https://acdongpgm.tistory.com/348</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;[모델 경량화 기법 적용 순서]&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;딥러닝 모델의 크기를 줄이는 경량화 기법은 다음과 같이 적용 순서를 결정할 수 있습니다.&lt;/p&gt;
&lt;ol style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Pruning: 불필요한 가중치를 제거하여 모델의 크기를 줄입니다. 모델의 크기가 대폭 축소되면서도 모델의 정확도는 크게 감소하지 않습니다.&lt;/li&gt;
&lt;li&gt;Quantization: 모델의 가중치와 활성화 함수 값을 낮은 비트 수로 표현하여 모델의 크기를 줄입니다. 모델의 크기가 줄어들어 메모리 사용량이 줄어들면서도 모델의 정확도는 크게 감소하지 않습니다.&lt;/li&gt;
&lt;li&gt;&lt;a&gt;Knowledge distillation&lt;/a&gt;: 대규모 모델의 지식을 작은 모델에 전달하여 작은 모델의 성능을 향상시킵니다. 작은 모델이 큰 모델의 성능을 따라잡게 되어 큰 모델의 정확도에 근접한 성능을 얻을 수 있습니다.&lt;/li&gt;
&lt;li&gt;Low-rank approximation: 모델의 가중치 행렬을 저차원으로 근사하여 모델의 크기를 줄입니다. 모델의 크기가 줄어들면서도 모델의 정확도는 크게 감소하지 않습니다.&lt;/li&gt;
&lt;li&gt;Knowledge-based activation pruning: 활성화 함수를 제거하여 모델의 크기를 줄입니다. 이 방법은 다른 경량화 기법을 모두 적용한 이후에 적용하는 것이 좋습니다.&lt;/li&gt;
&lt;/ol&gt;
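&lt;p data-ke-size=&quot;size16&quot;&gt;As a concrete taste of the quantization step, PyTorch's dynamic quantization swaps the Linear layers of a model for int8 versions at load time. The toy model below is an illustrative assumption, not a real NLP model.&lt;/p&gt;

```python
import torch

# A toy model; the layer sizes are illustrative only
model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 2),
)

# Dynamically quantize every Linear layer's weights to int8
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(quantized(x).shape)  # the call interface is unchanged
```

&lt;p data-ke-size=&quot;size16&quot;&gt;The quantized model is called exactly like the original; only the stored weights (and the matmul kernels) change.&lt;/p&gt;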
&lt;p data-ke-size=&quot;size16&quot;&gt;This order should be decided with the model's characteristics, its use case, and the interactions between the techniques in mind; analyze and validate thoroughly before applying any of them.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Machine learning/Deep Learning</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/348</guid>
      <comments>https://acdongpgm.tistory.com/348#entry348comment</comments>
      <pubDate>Thu, 23 Mar 2023 21:26:58 +0900</pubDate>
    </item>
    <item>
      <title>[Elastic]. elastic search Docker Setup</title>
      <link>https://acdongpgm.tistory.com/347</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;설치 정보&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Verson 8.4.2&lt;/li&gt;
&lt;li&gt;Single Node Setting&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://localhost&quot;&gt;localhost&lt;/a&gt; 에 설치&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Create the network&lt;/h3&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;docker network create elastic&lt;/code&gt;&lt;/pre&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Download the Docker image &amp;amp; run the container&lt;/h3&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;docker run --name es-node01 --net elastic -p 9200:9200 -p 9300:9300 -t docker.elastic.co/elasticsearch/elasticsearch:8.4.2&lt;/code&gt;&lt;/pre&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Reset the password&lt;/h3&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;docker exec -it es-node01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;Password memo: Gz-YD7e0hGc*5NsMRjvj&lt;/i&gt;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Copy the certificate file&lt;/h3&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;docker cp es-node01:/usr/share/elasticsearch/config/certs/http_ca.crt .&lt;/code&gt;&lt;/pre&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Verify the connection&lt;/h3&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;curl --cacert http_ca.crt -u elastic:&quot;&amp;lt;password&amp;gt;&quot; https://localhost:9200&lt;/code&gt;&lt;/pre&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Connect with the Python client&lt;/h3&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from elasticsearch import Elasticsearch

# Password for the 'elastic' user generated by Elasticsearch
ELASTIC_PASSWORD = &quot;&amp;lt;password&amp;gt;&quot;

# Create the client instance
client = Elasticsearch(
    &quot;https://localhost:9200&quot;,
    ca_certs=&quot;/path/to/http_ca.crt&quot;,
    basic_auth=(&quot;elastic&quot;, ELASTIC_PASSWORD)
)

# Successful response!
client.info()
# {'name': 'instance-0000000000', 'cluster_name': ...}&lt;/code&gt;&lt;/pre&gt;
&lt;hr data-ke-style=&quot;style1&quot; /&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Kibana Setup&lt;/h3&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;docker run --name kib-01 --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.4.2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Generate a Kibana enrollment token&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;docker exec -it es-node01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Token memo:&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;ldquo;eyJ2ZXIiOiI4LjQuMiIsImFkciI6WyIxNzIuMjIuMC4yOjkyMDAiXSwiZmdyIjoiMWViZmQ1NjczZjY2NTZmNWY3Mjc2ZjI0NGI4NjBiOGFlY2MwNDk3NGY0NjZhNDQ0OWZkY2RkYTJiNTc5MDY1ZCIsImtleSI6Ikd4ZVRLNFFCUzA2YlZGVUQ3cUQ5OlpmNjNUZGY4VFoyNExaaW1nV1NxeEEifQ==&amp;rdquo;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Open Kibana&lt;/p&gt;
&lt;pre class=&quot;dts&quot;&gt;&lt;code&gt;http://0.0.0.0:5601/?code=888182&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;References:&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.elastic.co/guide/en/kibana/current/docker.html&quot;&gt;https://www.elastic.co/guide/en/kibana/current/docker.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/connecting.html&quot;&gt;https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/connecting.html&lt;/a&gt;&lt;/p&gt;</description>
      <category>API/ElasticSearch</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/347</guid>
      <comments>https://acdongpgm.tistory.com/347#entry347comment</comments>
      <pubDate>Wed, 22 Mar 2023 18:57:31 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Adding tokens to a HuggingFace Tokenizer</title>
      <link>https://acdongpgm.tistory.com/342</link>
      <description>&lt;pre id=&quot;code_1677479766695&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from transformers import AutoTokenizer

tokenzer = AutoTokenizer.from_pretrained({model_path})

# new tokens
new_tokens = &quot;[NEW]&quot;

tokenizer.add_special_tokens({&quot;additional_special_tokens&quot; : [new_tokens]})
model.resize_token_embeddings(len(tokenizer))&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you skip resize_token_embeddings(len(tokenizer)), the forward pass fails with an embedding (index out of range) error.&lt;/p&gt;
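The failure mode is just an out-of-range row lookup: the added token is assigned an id equal to the old vocabulary size, but the embedding matrix still has only vocab-size rows. A minimal NumPy sketch of the mechanism (the array here is a stand-in for the model's embedding table, not real weights):

```python
import numpy as np

vocab_size, dim = 10, 4
embeddings = np.random.rand(vocab_size, dim)  # stand-in for the token embedding table

new_token_id = vocab_size                     # id handed to the newly added token
hit_error = False
try:
    _ = embeddings[new_token_id]              # what the forward pass would attempt
except IndexError:
    hit_error = True                          # the "embedding error" before resizing

# resize_token_embeddings(len(tokenizer)) effectively appends rows:
embeddings = np.vstack([embeddings, np.zeros((1, dim))])
assert hit_error and embeddings[new_token_id].shape == (dim,)
```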
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Addendum:&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For SBERT (sentence-transformers):&lt;/p&gt;
&lt;pre id=&quot;code_1678239521135&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from sentence_transformers import SentenceTransformer, models

# add tokens to an already loaded SentenceTransformer `model`
tokens = [&quot;[NEW]&quot;]
embedding_model = model._first_module()
embedding_model.tokenizer.add_tokens(tokens, special_tokens=True)
embedding_model.auto_model.resize_token_embeddings(
    len(embedding_model.tokenizer))
pooling_model = models.Pooling(
    embedding_model.get_word_embedding_dimension())
model = SentenceTransformer(modules=[embedding_model, pooling_model])&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Machine learning/NLP</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/342</guid>
      <comments>https://acdongpgm.tistory.com/342#entry342comment</comments>
      <pubDate>Mon, 27 Feb 2023 15:36:28 +0900</pubDate>
    </item>
    <item>
      <title>[Event]. LangCon 2023 attendance review.</title>
      <link>https://acdongpgm.tistory.com/341</link>
      <description>&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1512&quot; data-origin-height=&quot;1512&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/EGklx/btr0GJEVRV7/2PfRsyVNzOOswVxFh4pGA1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/EGklx/btr0GJEVRV7/2PfRsyVNzOOswVxFh4pGA1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/EGklx/btr0GJEVRV7/2PfRsyVNzOOswVxFh4pGA1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FEGklx%2Fbtr0GJEVRV7%2F2PfRsyVNzOOswVxFh4pGA1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;399&quot; height=&quot;1512&quot; data-origin-width=&quot;1512&quot; data-origin-height=&quot;1512&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;I attended LangCon 2023, held on February 18, 2023.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;I'll save the technical details for a later post; for now I want to share my impressions.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;It was surreal to meet, as speakers, the people I had only seen on blogs, GitHub, and open chat rooms.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;To me they are cooler than BTS, and my heart was full.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Some of the speakers' open-source projects actually run in our service, so I wanted to thank them in person,&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;but I held back; I want to deliver that thanks once I have become a better developer.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;1. ELECTRA&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;2. Pecab&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;3. KLUE&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;and so on...&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;If there is another conference like this, I will definitely attend.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;A room full of NLP experts is&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;a big help, even if you take nothing concrete away.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;(In fact, I took away a lot.)&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;So many people do the same work I do,&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;such brilliant people do the same work I do,&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;and if I work hard, I can become like them.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Partway through I also found myself thinking&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;that I want to contribute to open source and help a lot of people.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Get famous on GitHub?.. and so on.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;+ I asked a question and got a book for it. :)&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1512&quot; data-origin-height=&quot;1512&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ywEdV/btr0GctzoFN/3hYWJ4iKk0MXpdCPRRr6ck/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ywEdV/btr0GctzoFN/3hYWJ4iKk0MXpdCPRRr6ck/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ywEdV/btr0GctzoFN/3hYWJ4iKk0MXpdCPRRr6ck/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FywEdV%2Fbtr0GctzoFN%2F3hYWJ4iKk0MXpdCPRRr6ck%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;429&quot; height=&quot;1512&quot; data-origin-width=&quot;1512&quot; data-origin-height=&quot;1512&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;This conference gave me a lot of motivation.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Networking matters too, of course, and I intend to keep building mine&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;(I saw plenty of people handing out business cards and introducing themselves),&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;but that only works once you reach a certain level.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;First, focus on raising my own value; networking comes after!&lt;/p&gt;</description>
      <category>Self-improvement</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/341</guid>
      <comments>https://acdongpgm.tistory.com/341#entry341comment</comments>
      <pubDate>Sat, 25 Feb 2023 22:54:31 +0900</pubDate>
    </item>
    <item>
      <title>[Graduation]. Done</title>
      <link>https://acdongpgm.tistory.com/340</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;졸업식을 따로 하지 않아서 졸업한 지도 모르고 지내다가.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;문득 생각이나서 학교홈페이지에 가서 확인해 보니 졸업이 되어있었다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;29살에 졸업하는게 뭔가 이상하지만 나에겐 좀 뜻깊은 일이다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;처음 전문대에 입학하고 게임기획자가 되고자 했던 나는 산업체를 경험하고 개발자의 필요성, 중요성을 깨달았고&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;뒤늦게 25살에 복학해서 프로그래밍을 배웠다. 물리를 싫어했던 나는 게임개발이 적성에 맞지 않았고&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;통계가 좋아 인공지능을 시작했다. 3번의 교육기관을 걸쳐 취업을 했고, 직장을 다니면서 야간으로 4년제 학위를 취득했다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;내 이전 전문대학교 성적은&lt;b&gt; 2.7&amp;nbsp;&lt;/b&gt; , 3.0이 넘지 않지만 확실히 내가 흥미 있는 분야를 공부하니까 성적은 저절로 따라왔다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;단 한 번도 성적을 위해 공부했던 적이 없다, 그냥 재미있고 배우고 싶어서 했을 뿐&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;졸업을 해봤자 바뀌는 건 문서한장 생기는 것이지만 그래도 직장 다니면서 야간에 열심히 한 나에게 수고했다고 말해주고 싶다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;693&quot; data-origin-height=&quot;233&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/v8CKi/btr0faWIIqK/8vLC9dYIZ1jBD283eyGDok/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/v8CKi/btr0faWIIqK/8vLC9dYIZ1jBD283eyGDok/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/v8CKi/btr0faWIIqK/8vLC9dYIZ1jBD283eyGDok/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fv8CKi%2Fbtr0faWIIqK%2F8vLC9dYIZ1jBD283eyGDok%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;693&quot; height=&quot;233&quot; data-origin-width=&quot;693&quot; data-origin-height=&quot;233&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;</description>
      <category>Self-improvement</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/340</guid>
      <comments>https://acdongpgm.tistory.com/340#entry340comment</comments>
      <pubDate>Tue, 21 Feb 2023 21:28:48 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Applying MultipleNegativesRankingLoss (Sentence Transformer)</title>
      <link>https://acdongpgm.tistory.com/339</link>
      <description>&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;750&quot; data-origin-height=&quot;500&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b8vVlB/btrZ85og5Vr/L9aJK6IFXGsR7Lmts2tZRK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b8vVlB/btrZ85og5Vr/L9aJK6IFXGsR7Lmts2tZRK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b8vVlB/btrZ85og5Vr/L9aJK6IFXGsR7Lmts2tZRK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb8vVlB%2FbtrZ85og5Vr%2FL9aJK6IFXGsR7Lmts2tZRK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;561&quot; height=&quot;374&quot; data-origin-width=&quot;750&quot; data-origin-height=&quot;500&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h2 style=&quot;text-align: left;&quot; data-ke-size=&quot;size26&quot;&gt;&amp;nbsp;&lt;/h2&gt;
&lt;h2 style=&quot;text-align: left;&quot; data-ke-size=&quot;size26&quot;&gt;BackGround&lt;/h2&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;When teaching an AI model to answer questions, there is one problem: usually only matching question/answer (positive) pairs exist.&lt;br /&gt;&lt;br /&gt;Supervised learning needs both correct and incorrect examples, but the incorrect (negative) pairs are missing, so you have to create them yourself.&lt;br /&gt;Many ideas for generating negative samples have appeared as a result.&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;1. Random Sampling &lt;/b&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;- The most common method: take several positive pairs and randomly shuffle only the answers, calling the mismatched pairs negatives.&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 100%;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 47.2093%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;배고프다.&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 42.3256%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;얼른 밥먹어 ㅠㅠ&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 10.4651%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #006dd7;&quot;&gt;Positive&lt;/span&gt;&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 47.2093%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;피곤하다.&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 42.3256%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;어제 밤샜어??&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 10.4651%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #006dd7;&quot;&gt;Positive&lt;/span&gt;&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 17px;&quot;&gt;
&lt;td style=&quot;width: 47.2093%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;배고프다.&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 42.3256%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;어제 밤샜어??&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 10.4651%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;Negative&lt;/span&gt;&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 17px;&quot;&gt;
&lt;td style=&quot;width: 47.2093%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;피곤하다.&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 42.3256%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;얼른 밥먹어 ㅠㅠ&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 10.4651%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;Negative&lt;/span&gt;&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;In practice, training on just this already works reasonably well.&lt;/p&gt;
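The table above can be produced mechanically. A small sketch (the helper name is mine, not from any library); rotating the answers by one position is the simplest way to guarantee no question keeps its own answer:

```python
def random_negatives(pairs):
    """Build negatives from (question, answer) positives by pairing each
    question with the next example's answer, so no question keeps its own
    answer (a rotation is the simplest derangement)."""
    n = len(pairs)
    return [(pairs[i][0], pairs[(i + 1) % n][1]) for i in range(n)]

positives = [("배고프다.", "얼른 밥먹어 ㅠㅠ"), ("피곤하다.", "어제 밤샜어??")]
negatives = random_negatives(positives)
# with two pairs, the rotation swaps the answers, as in the table above
```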
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;2. Hard Negative Sampling&lt;/b&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;- This method is built on the idea that the model learns best from data it finds confusing, so you deliberately give it hard problems.&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 100%;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 47.2093%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;배고프다.&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 42.3256%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;얼른 밥먹어 ㅠㅠ&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 10.4651%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #006dd7;&quot;&gt;Positive&lt;/span&gt;&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 47.2093%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;배고프다.&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 42.3256%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;밥 맛있다&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 10.4651%; height: 20px; text-align: center;&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;Negative&lt;/span&gt;&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 17px;&quot;&gt;
&lt;td style=&quot;width: 47.2093%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;피곤하다.&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 42.3256%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;어제 밤샜어??&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 10.4651%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #006dd7;&quot;&gt;Positive&lt;/span&gt;&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 17px;&quot;&gt;
&lt;td style=&quot;width: 47.2093%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;피곤하다.&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 42.3256%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;잠 잘잤다~&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 10.4651%; height: 17px; text-align: center;&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;Negative&lt;/span&gt;&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;- Typical approaches: use BM25 to pick answers that match on keywords but differ in content,&lt;br /&gt;or train once with random sampling and then retrain on the examples the model found confusing.&lt;br /&gt;+ Also: changing only the tense, swapping the subject, changing only the verb, and so on...&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Hard negative sampling&lt;/b&gt; is a good way to squeeze out maximum performance.&lt;/p&gt;
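A rough sketch of the BM25 idea, with a crude token-overlap score standing in for BM25 and a hypothetical helper name: score every other answer against the gold answer and take the lexically closest one as the hard negative.

```python
def hard_negative(gold_answer, candidate_answers):
    """Pick the candidate that shares the most whitespace tokens with the
    gold answer, excluding the gold answer itself. Token overlap is a crude
    stand-in for a BM25 score."""
    gold_tokens = set(gold_answer.split())
    pool = [c for c in candidate_answers if c != gold_answer]
    return max(pool, key=lambda c: len(gold_tokens & set(c.split())))

# a keyword-matching but wrong answer beats an unrelated one
picked = hard_negative(
    "go eat some food",
    ["go eat some food", "the food there is great, go eat", "did you stay up all night?"],
)
```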
&lt;hr data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style1&quot; /&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;But all of the methods above require building the negative data yourself.&lt;/b&gt;&lt;br /&gt;Before I knew about MultipleNegativesRankingLoss, I used random sampling and trained with a 1:1 ratio of positives to negatives.&lt;br /&gt;The results were not bad, but I learned that MultipleNegativesRankingLoss could do better.&lt;br /&gt;&lt;br /&gt;I first heard of MultipleNegativesRankingLoss at a conference, where the Toss team said they used it&lt;br /&gt;to build their service's search system.&lt;/p&gt;
&lt;hr data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h2 style=&quot;text-align: left;&quot; data-ke-size=&quot;size26&quot;&gt;What is MultipleNegativesRankingLoss?&lt;/h2&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;1. It is an excellent loss function when you only have positive pairs.&lt;/b&gt; &lt;br /&gt;- Within each batch, the other n-1 documents are sampled as negatives.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;2. For each a_i, every other p_j is used as a negative sample.&lt;/b&gt; &lt;br /&gt;- So each a_i gets 1 positive example (p_i) and n-1 negative examples (p_j). &lt;br /&gt;&lt;br /&gt;It then minimizes the negative log-likelihood of the softmax-normalized scores.&lt;br /&gt;&lt;br /&gt;&lt;s&gt;It basically seems to borrow its idea from TripletLoss.&lt;/s&gt; &lt;br /&gt;&lt;br /&gt;Put simply: even without supplying negative samples, the loss generates them on its own during training; in practice you feed it positive pairs only.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;With a batch size of 32, each step trains on 1 positive and 31 (randomly sampled) negatives.&lt;/b&gt; &lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
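Concretely, the loss is plain softmax cross-entropy over the in-batch score matrix, with the diagonal (the true pairs) as the correct class. A NumPy sketch of that computation, not the sentence-transformers implementation itself (which by default also scales cosine similarities by 20):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """MultipleNegativesRankingLoss sketch: for each anchor a_i, p_i is the
    positive and every other p_j in the batch acts as a negative.
    anchors, positives: (n, d) arrays of sentence embeddings."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                      # (n, n) cosine similarities
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-log_softmax.diagonal().mean())    # NLL of the diagonal

# perfectly matched pairs drive the loss toward zero
assert mnr_loss(np.eye(3), np.eye(3)) < 0.01
```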
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Drawbacks.&lt;/b&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;1. Since no positive/negative labels are attached in the first place, you cannot evaluate mid-training.&lt;br /&gt;- You have to build your own validation data and evaluate after training finishes.&lt;br /&gt;2. Relatively heavy GPU memory usage plus longer training time.&lt;/p&gt;
&lt;hr data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h2 style=&quot;text-align: left;&quot; data-ke-size=&quot;size26&quot;&gt;How to train?&lt;/h2&gt;
&lt;h3 style=&quot;text-align: left;&quot; data-ke-size=&quot;size23&quot;&gt;Environment&lt;/h3&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;Python3.9 &lt;br /&gt;Ubuntu20.04&lt;/blockquote&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 style=&quot;text-align: left;&quot; data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;1. Import&lt;/b&gt;&lt;/h4&gt;
&lt;pre class=&quot;python&quot; data-ke-type=&quot;codeblock&quot; data-ke-language=&quot;python&quot;&gt;&lt;code&gt;from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.readers import InputExample

from sentence_transformers import datasets, models

import os
import pandas as pd
from pathlib import Path&lt;/code&gt;&lt;/pre&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 style=&quot;text-align: left;&quot; data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;2. Set up the model &amp;amp; data&lt;/b&gt;&lt;/h4&gt;
&lt;pre class=&quot;python&quot; data-ke-type=&quot;codeblock&quot; data-ke-language=&quot;python&quot;&gt;&lt;code&gt;embedding_model = models.Transformer('klue/roberta-base')
pooler = models.Pooling(
    embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True
)
model = SentenceTransformer(modules=[embedding_model, pooler])

model.max_seq_length = 128

data = pd.read_parquet(DATA_PATH)

train_examples = []
for _, row in data.iterrows():
    s1 = row['Q']
    s2 = row['answer']
    train_examples.append(InputExample(texts=[s1, s2]))&lt;/code&gt;&lt;/pre&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;The key point, as noted above, is that no labels are passed in.&lt;/p&gt;
&lt;h4 style=&quot;text-align: left;&quot; data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;3. Train&lt;/b&gt;&lt;/h4&gt;
&lt;pre class=&quot;python&quot; data-ke-type=&quot;codeblock&quot; data-ke-language=&quot;python&quot;&gt;&lt;code&gt;batch_size = 32
num_epochs = 4

train_dataloader = datasets.NoDuplicatesDataLoader(
    train_examples, batch_size=batch_size)
train_loss = losses.MultipleNegativesRankingLoss(model=model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=num_epochs,
    output_path='./save_path',
    show_progress_bar=True
)&lt;/code&gt;&lt;/pre&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;NoDuplicatesDataLoader appears here: it builds each batch so that no duplicate texts occur within it.&lt;br /&gt;Note that no intermediate checkpoints are saved (normally the model is saved at each evaluation); the model is only saved once training fully finishes.&lt;/p&gt;
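Duplicates matter because, with in-batch negatives, a second copy of the same answer in one batch would be scored as a negative against its own question. A sketch of the batching idea (my own greedy version, not the sentence-transformers code):

```python
def no_duplicate_batches(examples, batch_size):
    """Greedily fill batches so no two examples in a batch share a text;
    anything that would collide is deferred to a later batch."""
    remaining = list(examples)
    batches = []
    while remaining:
        batch, seen, leftover = [], set(), []
        for q, a in remaining:
            if len(batch) < batch_size and q not in seen and a not in seen:
                batch.append((q, a))
                seen.update((q, a))
            else:
                leftover.append((q, a))
        batches.append(batch)
        remaining = leftover
    return batches

# two questions sharing one answer end up in different batches
batches = no_duplicate_batches([("q1", "a"), ("q2", "a"), ("q3", "b")], batch_size=3)
```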
&lt;h3 style=&quot;text-align: left;&quot; data-ke-size=&quot;size23&quot;&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;/h3&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;Building negative data for a question/answer dataset takes a lot of manual work, and it has to be redone for every change.&lt;br /&gt;A 1:1 ratio of positives to negatives also makes it hard for the model to judge answers that never appeared in the training data.&lt;br /&gt;MultipleNegativesRankingLoss instead packs many negatives into every batch, teaching the model to say a firm no to what is definitely wrong.&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;h2 style=&quot;text-align: left;&quot; data-ke-size=&quot;size26&quot;&gt;Reference&lt;/h2&gt;
&lt;blockquote data-ke-style=&quot;style2&quot;&gt;https://www.pinecone.io/learn/fine-tune-sentence-transformers-mnr/ &lt;br /&gt;https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/nli/training_nli_v2.py &lt;br /&gt;https://www.sbert.net/docs/package_reference/losses.html&lt;/blockquote&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Machine learning/NLP</category>
      <category>MultipleNegativesRankingLoss</category>
      <category>Negative Sampling</category>
      <category>QA</category>
      <category>random sampling</category>
      <category>Sentence Transfomer</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/339</guid>
      <comments>https://acdongpgm.tistory.com/339#entry339comment</comments>
      <pubDate>Tue, 21 Feb 2023 11:16:42 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. What is a gold label in AI? (feat. ChatGPT)</title>
      <link>https://acdongpgm.tistory.com/338</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;가끔 논문이나 컨퍼런스에가면 골드 라벨, 실버 라벨 등의 용어를 자주 듣게된다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;지도학습에 필요한 라벨은 알겠는데 골드 라벨, 실버 라벨은 뭘까? 등급이 있는 것일까?&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;What is a gold label (Gold Label)?&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I asked ChatGPT.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A gold label is a &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;high-quality label&lt;/b&gt;&lt;/span&gt; created by experts rather than by ordinary annotators. Using gold-labeled data therefore yields more accurate models.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In general, there are two ways to obtain gold-labeled data:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Commission experts to produce the gold labels:&lt;br /&gt;data annotated by experts can then serve as gold-label data.&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Obtain gold labels from previous research or competitions: &lt;br /&gt;gold-labeled data from earlier studies or competitions is sometimes public; in that case it can be used for training and evaluation.&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Of these two, commissioning experts guarantees high accuracy but costs substantial time and money. Data from earlier research or competitions saves both, but its quality is not guaranteed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So commissioning experts is the surest way to secure gold-label data, but given the time and cost burden, reusing gold labels from earlier research or competitions is also worth considering.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;To summarize,&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;a gold label is data labeled by someone with deep knowledge of the domain.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Silver labels aren't precisely defined, but generally seem to mean data of lower quality than gold labels,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;though still above what an average annotator produces.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example: data generated by a large language model, or data labeled by non-experts but carefully validated.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://chat.openai.com/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://chat.openai.com/&lt;/a&gt;&lt;/p&gt;</description>
      <category>Machine learning/NLP</category>
      <category>glod label</category>
      <category>nlp</category>
      <category>골드 라벨</category>
      <category>데이터</category>
      <category>인공지능 데이터</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/338</guid>
      <comments>https://acdongpgm.tistory.com/338#entry338comment</comments>
      <pubDate>Tue, 21 Feb 2023 10:48:09 +0900</pubDate>
    </item>
    <item>
      <title>[AWS]. Downloading data from an S3 bucket (with Python)</title>
      <link>https://acdongpgm.tistory.com/337</link>
      <description>&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;259&quot; data-origin-height=&quot;194&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/JkJL8/btrZrzCpJjW/zCSKfpp2mKr3dJvVqMQFvk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/JkJL8/btrZrzCpJjW/zCSKfpp2mKr3dJvVqMQFvk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/JkJL8/btrZrzCpJjW/zCSKfpp2mKr3dJvVqMQFvk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FJkJL8%2FbtrZrzCpJjW%2FzCSKfpp2mKr3dJvVqMQFvk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;259&quot; height=&quot;194&quot; data-origin-width=&quot;259&quot; data-origin-height=&quot;194&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&amp;nbsp;&lt;/h2&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;BackGround&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;When you build various datasets and AI models locally and then want to deploy them,&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;all of that data has to be moved to the server. Usually that means the SCP command, or, when I'm feeling really lazy, just drag and drop in VS Code,&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;which takes longer and longer as the files get bigger.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;So this time I looked into downloading files from AWS S3 whenever the data or model to load is missing locally.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;Set it up once and you never have to copy files by hand again. &lt;span style=&quot;color: #006dd7;&quot;&gt;&lt;b&gt;Automating repetitive work is the programmer's way&lt;/b&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;What is AWS S3?&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;S3 is AWS's well-known storage service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;data availability, security, and performance. (Or so they say.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Good service, sure, but cost is always the question. How much does it run?&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1140&quot; data-origin-height=&quot;158&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/busPXE/btrZHSppFdg/bpf4WT3TroT39eKJhdndk0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/busPXE/btrZHSppFdg/bpf4WT3TroT39eKJhdndk0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/busPXE/btrZHSppFdg/bpf4WT3TroT39eKJhdndk0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbusPXE%2FbtrZHSppFdg%2Fbpf4WT3TroT39eKJhdndk0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1140&quot; height=&quot;158&quot; data-origin-width=&quot;1140&quot; data-origin-height=&quot;158&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I will never use more than 50 TB, so I can simply assume 0.023 USD per GB.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;My largest dataset is about 3 GB, which works out to roughly 100 KRW a month.&lt;/p&gt;
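As a sanity check on that estimate, the arithmetic can be sketched in Python. The 0.023 USD/GB-month price is the figure quoted above; the 1,300 KRW/USD exchange rate is my own assumption, and actual prices vary by region and over time:

```python
# Rough S3 standard-tier storage cost (assumptions: 0.023 USD per GB-month,
# ~1,300 KRW per USD; real rates vary by region and change over time)
PRICE_PER_GB_USD = 0.023
KRW_PER_USD = 1300  # assumed exchange rate

def monthly_cost_krw(size_gb: float) -> float:
    """Return the approximate monthly storage cost in KRW."""
    return size_gb * PRICE_PER_GB_USD * KRW_PER_USD

print(round(monthly_cost_krw(3.0)))  # a 3 GB dataset is on the order of 100 KRW/month
```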
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;**S3 only bills for what you actually use.&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;How do I download from the S3 bucket?&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Environment&lt;/h3&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;Python3.9&lt;br /&gt;Ubuntu20.04&lt;/blockquote&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;1. Define the config&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1676626730038&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from pydantic import BaseSettings
from typing import Optional

# configs.py
from pathlib import Path


class DbConfig(BaseSettings):

    BASE_DIR: Path = Path(__file__).resolve().parent
    DATA_PATH: Path = BASE_DIR.joinpath(
        &quot;data/multi_unique_n7658745.gzip&quot;)

    S3_BUCKET_NAME: Optional[str] = None
    S3_ACCESS_KEY: Optional[str] = None
    S3_SECRET_KEY: Optional[str] = None

    class Config:
        env_file: str = &quot;.env&quot;


settings = DbConfig()&lt;/code&gt;&lt;/pre&gt;
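For reference, a hypothetical `.env` file consumed by the config above might look like this. The key names match the settings fields; the values are placeholders, not real credentials:

```shell
# .env  (placeholder values -- never commit real credentials)
S3_BUCKET_NAME=my-bucket
S3_ACCESS_KEY=AKIAXXXXXXXXXXXXXXXX
S3_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```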
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;2. Declare the download functions&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1675836291484&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import boto3
import os
import sys

from config import settings


S3_ACCESS_KEY = settings.S3_ACCESS_KEY
S3_SECRET_KEY = settings.S3_SECRET_KEY
S3_BUCKET_NAME = settings.S3_BUCKET_NAME

DATA_PATH = settings.DATA_PATH
BASE_DIR = settings.BASE_DIR

def get_s3_client():
    s3 = boto3.client('s3',
                      aws_access_key_id=S3_ACCESS_KEY,
                      aws_secret_access_key=S3_SECRET_KEY,
                      region_name='ap-northeast-2'
                      )
    return s3


def download(local_file_name, s3_bucket, s3_object_key):
    s3 = get_s3_client()
    meta_data = s3.head_object(Bucket=s3_bucket, Key=s3_object_key)
    total_length = int(meta_data.get('ContentLength', 0))
    downloaded = 0

    def progress(chunk):
        nonlocal downloaded
        downloaded += chunk
        done = int(50 * downloaded / total_length)
        sys.stdout.write(&quot;\r[%s%s]&quot; % ('=' * done, ' ' * (50-done)))
        sys.stdout.flush()

    print(f'Downloading {s3_object_key}')
    with open(local_file_name, 'wb') as f:
        s3.download_fileobj(
            s3_bucket, s3_object_key, f, Callback=progress)


def download_from_s3():
    # make sure the local data directory exists
    os.makedirs(os.path.join(BASE_DIR, &quot;data&quot;), exist_ok=True)

    # download only when the file itself is missing
    if not os.path.exists(DATA_PATH):
        print(&quot;download answer dataframe&quot;)
        download(DATA_PATH, S3_BUCKET_NAME,
                 'data/multi_unique_n7658745.gzip')
    else:
        print(&quot;answer dataframe already exists&quot;)

    print(&quot;download setting complete&quot;)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A plain download would work, but I designed it to show a progress bar (see Reference).&lt;/p&gt;
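The callback above boils down to rendering a fixed-width bar from a byte ratio. Pulled out as a standalone sketch (`render_bar` is a hypothetical helper for illustration, not part of boto3):

```python
def render_bar(downloaded: int, total: int, width: int = 50) -> str:
    """Render a fixed-width text progress bar such as '[=====     ]'."""
    done = int(width * downloaded / total)  # number of completed segments
    return "[" + "=" * done + " " * (width - done) + "]"

print(render_bar(25, 100))
print(render_bar(100, 100))
```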
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;3. Call it before loading the data&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1675836647273&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import pandas as pd

def _load_data():
    download_from_s3()
    df = pd.read_parquet(DATA_PATH)
    return df&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now, whenever the data or a model changes, I no longer have to copy files to the server by hand; any file registered in the S3 bucket is downloaded automatically when it is missing. &lt;s&gt;Laziness really is the engine of technological progress.&lt;/s&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For NLP models I still think downloading from HuggingFace is the better option,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;but for large models or datasets, &lt;b&gt;even if the cost stings,&lt;/b&gt; downloading from S3 works well too.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Reference&lt;/h2&gt;
&lt;blockquote data-ke-style=&quot;style2&quot;&gt;https://stackoverflow.com/questions/50100221/download-file-from-aws-s3-using-python&lt;br /&gt;https://stackoverflow.com/questions/41827963/track-download-progress-of-s3-file-using-boto3-and-callbacks&lt;br /&gt;https://aws.amazon.com/ko/s3/pricing/&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Python</category>
      <category>model download</category>
      <category>S3</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/337</guid>
      <comments>https://acdongpgm.tistory.com/337#entry337comment</comments>
      <pubDate>Wed, 15 Feb 2023 18:31:28 +0900</pubDate>
    </item>
    <item>
      <title>[MLops]. Improving Sentence Embedding Speed and Compute with ONNX Runtime</title>
      <link>https://acdongpgm.tistory.com/333</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;443&quot; data-origin-height=&quot;114&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/4Ky0z/btrYzDZFxkr/dJiCBDJZs9r3XUWG5OMZBK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/4Ky0z/btrYzDZFxkr/dJiCBDJZs9r3XUWG5OMZBK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/4Ky0z/btrYzDZFxkr/dJiCBDJZs9r3XUWG5OMZBK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F4Ky0z%2FbtrYzDZFxkr%2FdJiCBDJZs9r3XUWG5OMZBK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;443&quot; height=&quot;114&quot; data-origin-width=&quot;443&quot; data-origin-height=&quot;114&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&amp;nbsp;&lt;/h2&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Background&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Our company uses a sentence embedding model fine-tuned on top of SentenceTransformer.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;But the larger the model gets, the longer embedding takes and the more compute it demands.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Shrink the model, though, and accuracy drops.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;ONNX, however, offers several features that push inference speed as high as possible while keeping accuracy loss to a minimum.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Here I want to summarize the benefits we gained by converting a Sentence-Transformer model to ONNX Runtime.&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;What is ONNX ?&lt;/h2&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;300&quot; data-origin-height=&quot;168&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/zdtSY/btrYvnpWcLG/zjwDD6WzLsIxiBvpx2ga4K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/zdtSY/btrYvnpWcLG/zjwDD6WzLsIxiBvpx2ga4K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/zdtSY/btrYvnpWcLG/zjwDD6WzLsIxiBvpx2ga4K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FzdtSY%2FbtrYvnpWcLG%2FzjwDD6WzLsIxiBvpx2ga4K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;462&quot; height=&quot;259&quot; data-origin-width=&quot;300&quot; data-origin-height=&quot;168&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;ONNX is short for Open Neural Network Exchange:&lt;br /&gt;as the name suggests, it is a shared platform that lets models built in different DNN frameworks&lt;br /&gt;(e.g. TensorFlow, PyTorch, etc.) interoperate with one another. &lt;br /&gt;ps. ONNX itself is also referred to as a DNN framework.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;ONNX has two main advantages.&lt;br /&gt;&lt;b&gt;1. Interoperability&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;: ONNX can convert models from many deep learning frameworks and run them in its inference engine.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;2. Hardware access&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;: ONNX Runtime makes hardware optimizations easier to reach.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Serif KR';&quot;&gt;&amp;nbsp;Do the modeling in familiar PyTorch or Keras, then convert to ONNX, which is optimized for serving&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;How to use Onnx Runtime?&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Environment&lt;/h3&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;Python3.9&lt;br /&gt;Ubuntu20.04&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;1. Export the PyTorch (SBERT) model to ONNX&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1675836291484&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from pathlib import Path
import transformers
from transformers.convert_graph_to_onnx import convert

convert(framework=&quot;pt&quot;, model=&quot;reppley/sentence-roberta-base&quot;,
        output=Path(&quot;onnx_models/sentence-roberta-base.onnx&quot;), opset=11)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;* Here, &quot;reppley/sentence-roberta-base&quot; is a SentenceTransformer model.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In practice you pass the model's path; if you just pass a name as in the code above, the model is looked up and loaded from HuggingFace.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;2. Quantization (optional)&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1675836647273&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from onnxruntime.quantization import quantize_dynamic, QuantType
quantize_dynamic(&quot;onnx_models/sentence-roberta-base.onnx&quot;, &quot;onnx_models/sentence-roberta-base_uint8.onnx&quot;, 
                 weight_type=QuantType.QUInt8)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;ONNX can quantize model weights to UINT8. Quantization cuts the model size to about a quarter,&lt;/p&gt;
&lt;p data-ke-size=&quot;size14&quot;&gt;and inference gets much faster (but there is some accuracy loss).&amp;nbsp; &lt;br /&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;*A technique that maps FP32 weights onto UINT8 (0~255)&amp;nbsp;&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;610&quot; data-origin-height=&quot;64&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dMGgNN/btrYyU8H7xG/3vH6X77A8YekRwjonZXTE0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dMGgNN/btrYyU8H7xG/3vH6X77A8YekRwjonZXTE0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dMGgNN/btrYyU8H7xG/3vH6X77A8YekRwjonZXTE0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdMGgNN%2FbtrYyU8H7xG%2F3vH6X77A8YekRwjonZXTE0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;610&quot; height=&quot;64&quot; data-origin-width=&quot;610&quot; data-origin-height=&quot;64&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
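In its simplest (asymmetric, per-tensor) form, that FP32-to-UINT8 mapping is just a linear rescale. The following is a minimal sketch of the idea, not onnxruntime's exact implementation:

```python
import numpy as np

def quantize_uint8(w: np.ndarray):
    """Linearly map float32 weights onto the 0..255 integer range."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0      # avoid division by zero for constant tensors
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q: np.ndarray, scale: float, lo: float) -> np.ndarray:
    """Approximately reconstruct the original weights."""
    return q.astype(np.float32) * scale + lo

w = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, lo = quantize_uint8(w)
w_hat = dequantize(q, scale, lo)   # close to w, with small rounding error
```

The rounding error per weight is at most half the scale step, which is the accuracy loss the post mentions.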
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;3. Inference&lt;br /&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;3-1. Load the models&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1675837932401&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from transformers import AutoTokenizer
from onnxruntime import InferenceSession
from sentence_transformers import SentenceTransformer
import torch

model = SentenceTransformer('reppley/sentence-roberta-base')

# tokenizer for the ONNX sessions (used in the benchmark cells below)
tokenizer = AutoTokenizer.from_pretrained('reppley/sentence-roberta-base')

sess = InferenceSession(&quot;onnx_models/sentence-roberta-base.onnx&quot;,
                        providers=[&quot;CPUExecutionProvider&quot;])

sess_uint8 = InferenceSession(&quot;onnx_models/sentence-roberta-base_uint8.onnx&quot;,
                        providers=[&quot;CPUExecutionProvider&quot;])&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For comparison, I loaded three models: the original SBERT model, the ONNX model, and the quantized ONNX model.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;3-2. Pooling function&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1675838000554&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;def mean_pooling(model_output, attention_mask):
    model_output = torch.from_numpy(model_output[0])
    # First element of model_output contains all token embeddings
    token_embeddings = model_output
    attention_mask = torch.from_numpy(attention_mask)
    input_mask_expanded = attention_mask.unsqueeze(
        -1).expand(token_embeddings.size())
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask, input_mask_expanded, sum_mask&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A SentenceTransformer model pools automatically when it embeds, but the ONNX model behaves like a plain BERT model,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;so to make the two embeddings match, the ONNX output has to go through the same pooling.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Model Test&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Original SentenceTransformer model&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1675838133025&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;%%timeit
query = &quot;안녕하세요&quot;
model.encode(query, convert_to_tensor=True, device='cpu')[:5]&lt;/code&gt;&lt;/pre&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot;&gt;&lt;code&gt;12.6 ms &amp;plusmn; 476 &amp;micro;s per loop (mean &amp;plusmn; std. dev. of 7 runs, 100 loops each)
tensor([ 0.2339, -0.1364,  0.6490, -0.2782, -0.1374])&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;SentenceTransformer - average embedding time (100 sentences): 0.12 s&lt;/b&gt;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Converted ONNX model&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1675838251429&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;%%timeit
query = &quot;안녕하세요&quot;
model_inputs = tokenizer(query, return_tensors=&quot;pt&quot;)
inputs_onnx = {k: v.cpu().detach().numpy() for k, v in model_inputs.items()}
mean_pooling(sess.run(None, inputs_onnx),inputs_onnx['attention_mask'])[0][0][:5]&lt;/code&gt;&lt;/pre&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot;&gt;&lt;code&gt;7.97 ms &amp;plusmn; 84.2 &amp;micro;s per loop (mean &amp;plusmn; std. dev. of 7 runs, 100 loops each)
tensor([ 0.2339, -0.1364,  0.6490, -0.2782, -0.1374])&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;ONNX - average embedding time (100 sentences): 0.08 s&lt;/b&gt;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;pre id=&quot;code_1675838360469&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;%%timeit
query = &quot;안녕하세요&quot;
model_inputs = tokenizer(query, return_tensors=&quot;pt&quot;)
inputs_onnx = {k: v.cpu().detach().numpy() for k, v in model_inputs.items()}
mean_pooling(sess_uint8.run(None, inputs_onnx),inputs_onnx['attention_mask'])[0][0][:5]&lt;/code&gt;&lt;/pre&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot;&gt;&lt;code&gt;2.66 ms &amp;plusmn; 128 &amp;micro;s per loop (mean &amp;plusmn; std. dev. of 7 runs, 100 loops each)
tensor([ 0.2372, -0.1313,  0.6409, -0.2680, -0.1305])&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;ONNX UINT8 - average embedding time (100 sentences): 0.02 s&lt;/b&gt;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Using ONNX Runtime cut inference time dramatically (&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;up to 6x&lt;/b&gt;&lt;/span&gt; faster). With quantization, though, you can see that the output values change slightly. Some accuracy loss is expected, but the speed difference was large enough that we chose the quantized model as the final serving model. For tasks where speed matters less, simply converting the model to ONNX without quantization is a good option.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Reference&lt;/h2&gt;
&lt;blockquote data-ke-style=&quot;style2&quot;&gt;https://learn.microsoft.com/ko-kr/windows/ai/windows-ml/tutorials/pytorch-convert-model&lt;br /&gt;https://beeny-ds.tistory.com/22&lt;br /&gt;https://onnx.ai/&lt;br /&gt;https://www.youtube.com/watch?v=MCafgeqWMhQ&lt;/blockquote&gt;</description>
      <category>MLops</category>
      <category>Model Serving</category>
      <category>onnx</category>
      <category>onnx runtime</category>
      <category>Quantizer</category>
      <category>sbert</category>
      <category>SentenceTransformer</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/333</guid>
      <comments>https://acdongpgm.tistory.com/333#entry333comment</comments>
      <pubDate>Wed, 8 Feb 2023 14:41:13 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. A Korean Formal/Informal Speech Classifier (formal classifier)</title>
      <link>https://acdongpgm.tistory.com/332</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://acdongpgm.tistory.com/237&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;2021.10.14 - [Data Science/NLP] - [전처리]. 한국어 존댓말과 반말을 구별하는 방법(feat. komoran)&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1675415843746&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;[전처리]. 한국어 존댓말과 반말을 구별하는 방법(feat. komoran)&quot; data-og-description=&quot;한국어는 영어와 다르게 존댓말(높힘말)과 반말(낮춤말)이 존재한다. 그래서 존댓말을 반말로 바꿔주고 반말을 존댓말로 바꿔주는 모델이 있으면 좋겠지만 (실제로 연구가 많이 진행되었지만 &quot; data-og-host=&quot;acdongpgm.tistory.com&quot; data-og-source-url=&quot;https://acdongpgm.tistory.com/237&quot; data-og-url=&quot;https://acdongpgm.tistory.com/237&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/kCfgu/hyRtWlk7sx/ccCJfKWqbXvki5AEk0wyO1/img.png?width=570&amp;amp;height=448&amp;amp;face=0_0_570_448,https://scrap.kakaocdn.net/dn/cOTyFr/hyRtZCoWz1/iAM9ZSJrvlftVVaa7arFP1/img.png?width=570&amp;amp;height=448&amp;amp;face=0_0_570_448,https://scrap.kakaocdn.net/dn/ujZI7/hyRt0adJ6n/barTQjFIBkW9HzLzSq5T5K/img.png?width=750&amp;amp;height=859&amp;amp;face=0_0_750_859&quot;&gt;&lt;a href=&quot;https://acdongpgm.tistory.com/237&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://acdongpgm.tistory.com/237&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/kCfgu/hyRtWlk7sx/ccCJfKWqbXvki5AEk0wyO1/img.png?width=570&amp;amp;height=448&amp;amp;face=0_0_570_448,https://scrap.kakaocdn.net/dn/cOTyFr/hyRtZCoWz1/iAM9ZSJrvlftVVaa7arFP1/img.png?width=570&amp;amp;height=448&amp;amp;face=0_0_570_448,https://scrap.kakaocdn.net/dn/ujZI7/hyRt0adJ6n/barTQjFIBkW9HzLzSq5T5K/img.png?width=750&amp;amp;height=859&amp;amp;face=0_0_750_859');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;[Preprocessing]. How to tell Korean formal speech from informal speech (feat. komoran)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Unlike English, Korean has formal (honorific) and informal (plain) speech. It would be nice to have a model that converts one into the other (a lot of research has in fact been done, but&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;acdongpgm.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;1. Background&lt;/b&gt;&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A while back, I introduced a simple way to classify formal vs. informal speech with a Korean morphological analyzer.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;But when I tried to apply that method in practice, it produced errors in many cases.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example:&lt;/p&gt;
&lt;pre id=&quot;code_1675416009353&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;'저번에 교수님께서 자료 가져오라했는데 기억나?'&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because of the honorific particle &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;&quot;께서&quot;&lt;/b&gt;&lt;/span&gt;, sentences like this one were frequently misjudged as formal overall.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So this time I built a deep learning model, and I'd like to share the process.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;2. Datasets&lt;/b&gt;&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I used two datasets.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Smilegate speech-style dataset (Korean SmileStyle Dataset)&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://github.com/smilegate-ai/korean_smile_style_dataset&quot;&gt;https://github.com/smilegate-ai/korean_smile_style_dataset&lt;/a&gt;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;a id=&quot;user-content-ai-허브-감성-대화-말뭉치&quot; href=&quot;https://github.com/alswhddh/formal_classifier#ai-%ED%97%88%EB%B8%8C-%EA%B0%90%EC%84%B1-%EB%8C%80%ED%99%94-%EB%A7%90%EB%AD%89%EC%B9%98&quot; aria-hidden=&quot;true&quot;&gt;&lt;/a&gt;AI Hub (Emotional Dialogue Corpus)&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;:&lt;span&gt; &lt;a href=&quot;https://www.aihub.or.kr/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.aihub.or.kr/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Smilegate dataset had plenty of data well suited to a formal-speech classifier; however,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I judged it too small to train a deep learning model on, so I added the AI Hub data to the training set.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;3. Preprocessing&lt;/b&gt;&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Among the Smilegate columns,&lt;/p&gt;
&lt;pre id=&quot;code_1675416417528&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;formal_cols = ['formal', 'gentle']
informal_cols = ['informal', 'chat', 'enfp', 'sosim', 'choding', 'joongding']&lt;/code&gt;&lt;/pre&gt;
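The column-to-label mapping described below can be sketched with pandas. The toy DataFrame here is a stand-in for the SmileStyle table, which stores one column per speech style; only the column names come from the snippet above, and the melt itself is a hypothetical sketch:

```python
import pandas as pd

formal_cols = ['formal', 'gentle']
informal_cols = ['informal', 'chat', 'enfp', 'sosim', 'choding', 'joongding']

# Toy stand-in for the SmileStyle table: one column per speech style.
df = pd.DataFrame({
    'formal': ['고마워요'],
    'informal': ['고마워'],
})

# Flatten style columns into (sentence, label) rows: 1 = formal, 0 = informal.
rows = []
for col in df.columns:
    label = 1 if col in formal_cols else 0
    for sent in df[col].dropna():
        rows.append({'sentence': sent, 'label': label})

train = pd.DataFrame(rows)
print(train)
```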
&lt;p data-ke-size=&quot;size16&quot;&gt;I mapped the formal and gentle columns to the formal label,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;and the informal, chat, enfp, sosim, choding, and joongding columns to the informal label.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The emotional dialogue corpus consists of human-chatbot conversations:&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;the humans always speak informally and the chatbot always formally, which made it well suited for training.&lt;/p&gt;
&lt;pre id=&quot;code_1675416902484&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;['시스템응답1', '시스템응답2', '시스템응답3', '시스템응답4']
['사람문장1', '사람문장2', '사람문장3', '사람문장4']&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Example data&lt;/b&gt;&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 55.349%; height: 117px;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr style=&quot;height: 17px;&quot;&gt;
&lt;td style=&quot;width: 87.3607%; text-align: center; height: 17px;&quot;&gt;&lt;b&gt;sentence&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 24.9035%; height: 17px;&quot;&gt;&lt;b&gt;label&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 87.3607%; height: 20px;&quot;&gt;공부를&amp;nbsp;열심히&amp;nbsp;해도&amp;nbsp;열심히&amp;nbsp;한&amp;nbsp;만큼&amp;nbsp;성적이&amp;nbsp;잘&amp;nbsp;나오지&amp;nbsp;않아&lt;/td&gt;
&lt;td style=&quot;width: 24.9035%; height: 20px;&quot;&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 87.3607%; height: 20px;&quot;&gt;아들에게&amp;nbsp;보내는&amp;nbsp;문자를&amp;nbsp;통해&amp;nbsp;관계가&amp;nbsp;회복되길&amp;nbsp;바랄게요&lt;/td&gt;
&lt;td style=&quot;width: 24.9035%; height: 20px;&quot;&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 87.3607%; height: 20px;&quot;&gt;참&amp;nbsp;열심히&amp;nbsp;사신&amp;nbsp;보람이&amp;nbsp;있으시네요&lt;/td&gt;
&lt;td style=&quot;width: 24.9035%; height: 20px;&quot;&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 87.3607%; height: 20px;&quot;&gt;나도&amp;nbsp;스시&amp;nbsp;좋아함&amp;nbsp;이번&amp;nbsp;달부터&amp;nbsp;영국&amp;nbsp;갈&amp;nbsp;듯&lt;/td&gt;
&lt;td style=&quot;width: 24.9035%; height: 20px;&quot;&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 20px;&quot;&gt;
&lt;td style=&quot;width: 87.3607%; height: 20px;&quot;&gt;본부장님이&amp;nbsp;내가&amp;nbsp;할&amp;nbsp;수&amp;nbsp;없는&amp;nbsp;업무를&amp;nbsp;계속&amp;nbsp;주셔서&amp;nbsp;힘들어&lt;/td&gt;
&lt;td style=&quot;width: 24.9035%; height: 20px;&quot;&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Overall data distribution&lt;/b&gt;&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 31.8605%; height: 102px;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 33.3333%; text-align: center;&quot;&gt;&lt;b&gt;label&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%; text-align: center;&quot;&gt;&lt;b&gt;train&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%; text-align: center;&quot;&gt;&lt;b&gt;test&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 33.3333%;&quot;&gt;0&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%;&quot;&gt;133,430&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%;&quot;&gt;34,908&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 33.3333%;&quot;&gt;1&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%;&quot;&gt;112,828&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%;&quot;&gt;29,839&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Preprocessing code: &lt;a href=&quot;https://github.com/alswhddh/formal_classifier/blob/main/data_engineering/get_train_data.py&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/alswhddh/formal_classifier/blob/main/data_engineering/get_train_data.py&lt;/a&gt;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;4. Training&lt;/b&gt;&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Training reuses Hugging Face and PyTorch Lightning base classes, and beomi/kcbert-base was used as the pretrained model.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;GitHub :&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://github.com/Beomi/KcBERT&quot;&gt;https://github.com/Beomi/KcBERT&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;HuggingFace :&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://huggingface.co/beomi/kcbert-base&quot;&gt;https://huggingface.co/beomi/kcbert-base&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Initially klue/roberta was used and reached 99% accuracy, but its predictions on real inputs looked wrong, so the model was switched to beomi/kcbert-base, which also performed well on real-world tests.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;b&gt;5. Results&lt;/b&gt;&lt;/h2&gt;
&lt;table style=&quot;border-collapse: collapse; width: 27.907%; height: 69px;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 50%;&quot;&gt;val_loss&lt;/td&gt;
&lt;td style=&quot;width: 50%;&quot;&gt;0.0051&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 50%;&quot;&gt;Accuracy&lt;/td&gt;
&lt;td style=&quot;width: 50%;&quot;&gt;0.997&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;Test results&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1675417276426&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;저번에 교수님께서 자료 가져오라하셨는데 기억나세요? : 존댓말입니다. ( 확률 99.19% )
저번에 교수님께서 자료 가져오라했는데 기억나? : 반말입니다. ( 확률 92.86% )&lt;/code&gt;&lt;/pre&gt;
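&lt;p data-ke-size=&quot;size16&quot;&gt;The percentages above are just the softmax over the classifier's two logits. A minimal sketch of that post-processing (the function name and the logit values here are illustrative, not from the repository):&lt;/p&gt;

```python
import math

def format_prediction(logits):
    # Softmax over two logits: index 0 = informal (반말), index 1 = formal (존댓말)
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]
    label = "존댓말입니다." if probs[1] > probs[0] else "반말입니다."
    return f"{label} ( 확률 {max(probs) * 100:.2f}% )"

print(format_prediction([-2.0, 2.9]))
```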
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Full source code: &lt;a href=&quot;https://github.com/alswhddh/formal_classifier&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/alswhddh/formal_classifier&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Thank you.&lt;/p&gt;&lt;/description&gt;
      <category>Machine learning/NLP</category>
      <category>formal classifier</category>
      <category>honorific classifier</category>
      <category>Korean</category>
      <category>딥러닝</category>
      <category>존댓말 반말 분류</category>
      <category>한국어 반말</category>
      <category>한국어 존댓말</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/332</guid>
      <comments>https://acdongpgm.tistory.com/332#entry332comment</comments>
      <pubDate>Fri, 3 Feb 2023 18:43:13 +0900</pubDate>
    </item>
    <item>
      <title>[NLP] Converting between informal and formal Korean speech with KakaoGPT</title>
      <link>https://acdongpgm.tistory.com/329</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;참고 :&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://developers.kakao.com/docs/latest/ko/kogpt/rest-api#sample&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://developers.kakao.com/docs/latest/ko/kogpt/rest-api#sample&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1672799805291&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;# coding=utf8
# Libraries needed for the REST API call
import requests
import json

# Enter the REST API key found under [My Application] &amp;gt; [App Keys]
REST_API_KEY = '{KEY}'

# Method that calls the KoGPT API
# Each parameter falls back to a default value
def kogpt_api(prompt, max_tokens = 1, temperature = 1.0, top_p = 1.0, n = 1):
    r = requests.post(
        'https://api.kakaobrain.com/v1/inference/kogpt/generation',
        json = {
            'prompt': prompt,
            'max_tokens': max_tokens,
            'temperature': temperature,
            'top_p': top_p,
            'n': n
        },
        headers = {
            'Authorization': 'KakaoAK ' + REST_API_KEY,
            'Content-Type': 'application/json'
        }
    )
    # Parse the response as JSON
    response = json.loads(r.content)
    return response&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Usage&lt;/p&gt;
&lt;pre id=&quot;code_1672799857792&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;prompt='''주어진 문장을 존댓말 문장으로 바꿔주세요.

문장:하지마!
존댓말:하지 말아주세요.

문장:나랑 같이 놀러가자
존댓말:저랑 같이 놀러가지 않으실래요?

문장:배고파 밥줘
존댓말:배가고픈데 밥을 먹어도 될까요?

문장:그거 재밌어?
존댓말:그것은 재미 있나요?

문장:뭐하는거야 지금
존댓말:지금 무엇을 하시는 건가요?

문장:당장 제자리에 돌려놔
존댓말:'''
response = kogpt_api(prompt, max_tokens=10, temperature=0.7)&lt;/code&gt;&lt;/pre&gt;
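&lt;p data-ke-size=&quot;size16&quot;&gt;Because KoGPT keeps generating past the answer (it starts producing the next &quot;문장:&quot; example), the returned text is usually trimmed at the first newline before use. A small helper, sketched here (not part of the Kakao API):&lt;/p&gt;

```python
def extract_answer(generation_text):
    # Keep only the text up to the first newline; the model continues
    # with further few-shot examples after the actual answer.
    return generation_text.split("\n")[0].strip()

print(extract_answer("당장 제자리에 돌려 놓으세요.\n\n문장"))
```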
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;Result&lt;/p&gt;
&lt;pre class=&quot;scheme&quot;&gt;&lt;code&gt;'generations': [{'text': '당장 제자리에 돌려 놓으세요.\n\n문장', 'tokens': 10}]&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Machine learning/NLP</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/329</guid>
      <comments>https://acdongpgm.tistory.com/329#entry329comment</comments>
      <pubDate>Wed, 4 Jan 2023 11:38:39 +0900</pubDate>
    </item>
    <item>
      <title>[Python] Passing arguments when running a script (argparse)</title>
      <link>https://acdongpgm.tistory.com/328</link>
      <description>&lt;pre id=&quot;code_1672793768683&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import argparse​&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;sbert_train.py&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1672793748064&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;if __name__ == &quot;__main__&quot;:
    parser = argparse.ArgumentParser()

    parser.add_argument(&quot;--model_name&quot;, type=str)
    parser.add_argument(&quot;--batch_size&quot;, type=int, default=32)
    parser.add_argument(&quot;--num_epochs&quot;, type=int, default=4)
    parser.add_argument(&quot;--eval_steps&quot;, type=int, default=100000)
    parser.add_argument(&quot;--gpu_id&quot;, type=str, default=&quot;0&quot;)
    
    args = parser.parse_args()
    
    config = {
        &quot;model_name&quot;: args.model_name,
        &quot;train_batch_size&quot;: args.batch_size,
        &quot;num_epochs&quot;: args.num_epochs,
        &quot;eval_steps&quot;: args.eval_steps,
        &quot;model_save_path&quot;: &quot;output/&quot;,
        &quot;data_path&quot;: &quot;./data/total_train/&quot;,
        &quot;gpu_id&quot;: args.gpu_id
    }

    sbert_train = SbertTrain(**config)
    sbert_train.retrain()
    sbert_train.evaluate()&lt;/code&gt;&lt;/pre&gt;
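&lt;p data-ke-size=&quot;size16&quot;&gt;The parser can also be exercised without a shell by passing an explicit argument list to parse_args, which is handy for quick checks. A self-contained sketch of the same flags:&lt;/p&gt;

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model_name", type=str)
parser.add_argument("--batch_size", type=int, default=32)
parser.add_argument("--num_epochs", type=int, default=4)

# Equivalent to: python3 sbert_train.py --model_name ... --batch_size 64
args = parser.parse_args(
    ["--model_name", "monologg/koelectra-small-v3-discriminator", "--batch_size", "64"]
)
print(args.model_name, args.batch_size, args.num_epochs)
```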
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Run:&amp;nbsp;&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1672793799987&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;python3 sbert_train.py --model_name &quot;monologg/koelectra-small-v3-discriminator&quot; --batch_size 32 --num_epochs 4 --eval_steps 100000 --gpu_id &quot;1&quot;&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Python</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/328</guid>
      <comments>https://acdongpgm.tistory.com/328#entry328comment</comments>
      <pubDate>Wed, 4 Jan 2023 09:57:06 +0900</pubDate>
    </item>
    <item>
      <title>[Comfort Bot Project]. Obok-i Diary, Day 3 - Adding Data</title>
      <link>https://acdongpgm.tistory.com/324</link>
      <description>&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;https://acdongpgm.tistory.com/314&quot; target=&quot;_blank&quot;&gt;&lt;span&gt;2022.12.06 - [Chatbot] - [위로봇 프로젝트]. 오복이 육아일기 2일차 - 설계&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;opengraph&quot; data-og-title=&quot;[위로봇 프로젝트]. 오복이 육아일기 2일차 - 설계&quot; data-ke-align=&quot;alignCenter&quot; data-og-description=&quot;2022.12.06 - [Chatbot] - [위로봇 프로젝트]. 오복이 육아일기 1일차 - 소개 위로봇 오복이의 프로세스는 아주 간단하게 설계되어있습니다. 웹 서버는 Python 언어를 기반으로한 FastAPI를 사용했습니다. &quot; data-og-host=&quot;acdongpgm.tistory.com&quot; data-og-source-url=&quot;https://acdongpgm.tistory.com/314&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/da58x4/hyQ1mxSs7k/YWXZMCzA7ySCYBryVrLAkK/img.png?width=800&amp;amp;height=409&amp;amp;face=0_0_800_409,https://scrap.kakaocdn.net/dn/JVEq1/hyQ3UGtNBz/7W9knTWMYem5xPKotwJrjk/img.png?width=800&amp;amp;height=409&amp;amp;face=0_0_800_409,https://scrap.kakaocdn.net/dn/bEJvdl/hyQ1j8YAzP/2ReCCsjKkjsPSGAbbVPn70/img.png?width=1804&amp;amp;height=924&amp;amp;face=0_0_1804_924&quot; data-og-url=&quot;https://acdongpgm.tistory.com/314&quot;&gt;
 &lt;a href=&quot;https://acdongpgm.tistory.com/314&quot; target=&quot;_blank&quot; data-source-url=&quot;https://acdongpgm.tistory.com/314&quot;&gt;
  &lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/da58x4/hyQ1mxSs7k/YWXZMCzA7ySCYBryVrLAkK/img.png?width=800&amp;amp;height=409&amp;amp;face=0_0_800_409,https://scrap.kakaocdn.net/dn/JVEq1/hyQ3UGtNBz/7W9knTWMYem5xPKotwJrjk/img.png?width=800&amp;amp;height=409&amp;amp;face=0_0_800_409,https://scrap.kakaocdn.net/dn/bEJvdl/hyQ1j8YAzP/2ReCCsjKkjsPSGAbbVPn70/img.png?width=1804&amp;amp;height=924&amp;amp;face=0_0_1804_924')&quot;&gt; 
  &lt;/div&gt;
  &lt;div class=&quot;og-text&quot;&gt;
   &lt;p class=&quot;og-title&quot;&gt;[위로봇 프로젝트]. 오복이 육아일기 2일차 - 설계&lt;/p&gt;
   &lt;p class=&quot;og-desc&quot;&gt;2022.12.06 - [Chatbot] - [위로봇 프로젝트]. 오복이 육아일기 1일차 - 소개 위로봇 오복이의 프로세스는 아주 간단하게 설계되어있습니다. 웹 서버는 Python 언어를 기반으로한 FastAPI를 사용했습니다. &lt;/p&gt;
   &lt;p class=&quot;og-host&quot;&gt;acdongpgm.tistory.com&lt;/p&gt;
  &lt;/div&gt;&lt;/a&gt;
&lt;/figure&gt;
&lt;hr data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot;&gt;
&lt;h3 style=&quot;text-align: left;&quot; data-ke-size=&quot;size23&quot;&gt;&lt;b&gt;Data used for the chatbot&lt;/b&gt;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;1. Youngsook Song's chatbot data&lt;br&gt;&lt;b&gt;&lt;a href=&quot;https://github.com/songys/Chatbot_data&quot; target=&quot;_blank&quot;&gt;&lt;span&gt;https://github.com/songys/Chatbot_data&lt;/span&gt;&lt;/a&gt;&lt;/b&gt;&lt;br&gt;2. AI-HUB wellness counseling data&lt;br&gt;&lt;b&gt;&lt;a href=&quot;https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-006&quot; target=&quot;_blank&quot;&gt;&lt;span&gt;https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-006&lt;/span&gt;&lt;/a&gt;&lt;/b&gt;&lt;br&gt;3. AI-HUB emotional dialogue corpus&lt;br&gt;&lt;a href=&quot;https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&amp;amp;topMenu=100&amp;amp;aihubDataSe=realm&amp;amp;dataSetSn=86&quot; target=&quot;_blank&quot;&gt;&lt;span&gt;https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&amp;amp;topMenu=100&amp;amp;aihubDataSe=realm&amp;amp;dataSetSn=86&lt;/span&gt;&lt;/a&gt; &lt;br&gt;The three datasets above were combined; answers that end with a question were removed, since the conversations are single-turn.&lt;br&gt; &lt;br&gt; &lt;br&gt;The merged data was reduced to questions and answers only and saved as a single file.&lt;br&gt;&lt;b&gt;Nearly 100,000 QA pairs were collected. &lt;span style=&quot;color: #EE2323;&quot;&gt;*Please download the datasets yourself.&lt;/span&gt;&lt;/b&gt;&lt;br&gt; &lt;br&gt;&lt;b&gt;&lt;span style=&quot;color: #F89009;&quot;&gt;Example data:&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 100%;&quot; border=&quot;1&quot; data-ke-style=&quot;style2&quot; data-ke-align=&quot;alignLeft&quot;&gt;
 &lt;tbody&gt;
  &lt;tr style=&quot;height: 20px;&quot;&gt;
   &lt;td style=&quot;width: 50%; height: 20px; text-align: justify;&quot;&gt;Question&lt;/td&gt;
   &lt;td style=&quot;width: 50%; height: 20px; text-align: justify;&quot;&gt;Answer&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr style=&quot;height: 20px;&quot;&gt;
   &lt;td style=&quot;width: 50%; height: 20px; text-align: justify;&quot;&gt;상실감이 느껴지고 내가 진 것 같은 기분이 들어요.&lt;/td&gt;
   &lt;td style=&quot;width: 50%; height: 20px; text-align: justify;&quot;&gt;마음이 헛헛해서 힘드셨군요. 돌아보면 즐거웠던 일, 행복했던 일, 뿌듯했던 일이 많을 거예요. 너무 상심 마세요.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr style=&quot;height: 18px;&quot;&gt;
   &lt;td style=&quot;width: 50%; height: 18px; text-align: justify;&quot;&gt;내가 한심스럽게 느껴지니까 그냥 한숨만 나오고 자꾸 눈물을 쏟게 되는것 같아&lt;/td&gt;
   &lt;td style=&quot;width: 50%; height: 18px; text-align: justify;&quot;&gt;자존감이 떨어져 속상하고 고민이 많이 드셨겠어요.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr style=&quot;height: 18px;&quot;&gt;
   &lt;td style=&quot;width: 50%; height: 18px; text-align: justify;&quot;&gt;그렇게 앉아서 몇 시간을 울었거든요.&lt;/td&gt;
   &lt;td style=&quot;width: 50%; height: 18px; text-align: justify;&quot;&gt;00님이 걱정되네요. 얼른 기분이 좀 나아지셔야 할 텐데 말이에요.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr style=&quot;height: 18px;&quot;&gt;
   &lt;td style=&quot;width: 50%; height: 18px; text-align: justify;&quot;&gt;한 발자국도 못 움직일 것 같고 온몸에 힘이 빠지고…&lt;/td&gt;
   &lt;td style=&quot;width: 50%; height: 18px; text-align: justify;&quot;&gt;많이 힘드셨겠군요. 휴식을 충분히 취하셔야 할 것 같아요.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr style=&quot;height: 18px;&quot;&gt;
   &lt;td style=&quot;width: 50%; height: 18px; text-align: justify;&quot;&gt;그럼 또 나는 엄청 걱정이 되는 거야.&lt;/td&gt;
   &lt;td style=&quot;width: 50%; height: 18px; text-align: justify;&quot;&gt;신경이 예민해지셨군요. 00님이 받는 스트레스가 컸을 것 같아 저 또한 걱정스러워요.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr style=&quot;height: 18px;&quot;&gt;
   &lt;td style=&quot;width: 50%; height: 18px; text-align: justify;&quot;&gt;조금밖에 못 자서 미치기 일 보 직전이야&lt;/td&gt;
   &lt;td style=&quot;width: 50%; height: 18px; text-align: justify;&quot;&gt;수면의 질이 떨어지면 피곤이 쌓여서 낮 생활에도 안 좋은 영향을 끼치잖아요. 00님 힘드셨겠어요.&lt;/td&gt;
  &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt; &lt;br&gt;Some answers are good, but many are half-hearted.&lt;br&gt;Still, this project is only possible because such data exists!&lt;br&gt;Most AI projects seem to get their ideas from the data that is available.&lt;br&gt;Without data, nothing gets off the ground.&lt;br&gt; &lt;/p&gt;
&lt;hr data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;The preprocessed data now needs to be uploaded to the Elasticsearch search engine.&lt;br&gt;Elasticsearch acts as both a database and a search engine.&lt;br&gt; &lt;br&gt;Its search engine ranks results with BM25, a length-normalized refinement of TF-IDF.&lt;br&gt;Roughly speaking, it improves search quality by weighting the important keywords.&lt;br&gt; &lt;br&gt;There is nothing to implement by hand: define an Elasticsearch mapping (the counterpart of a DB schema) and put the data into an index (the counterpart of a table),&lt;br&gt;and the Korean Nori tokenizer automatically splits each document into tokens as it is stored.&lt;br&gt; &lt;br&gt;Elasticsearch version 8.5.2,&lt;br&gt;the latest at the time of writing, was used.&lt;br&gt; &lt;br&gt;&lt;a href=&quot;https://acdongpgm.tistory.com/259&quot; target=&quot;_blank&quot;&gt;&lt;span&gt;2022.04.01 - [ElasticSearch] - [ElasticSearch] . ES , KB Download , setting - (2)&lt;/span&gt;&lt;/a&gt;&lt;br&gt;See the previous post for the installation guide.&lt;br&gt; &lt;br&gt;&lt;b&gt;mapping.json&lt;/b&gt;&lt;/p&gt;
&lt;pre data-ke-type=&quot;codeblock&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot;&gt;&lt;code&gt;{
    &quot;settings&quot;: {
        &quot;number_of_shards&quot;: 5,
        &quot;number_of_replicas&quot;: 1,
        &quot;index&quot;: {
            &quot;analysis&quot;: {
                &quot;analyzer&quot;: {
                    &quot;nori_token_analyzer&quot;: {
                        &quot;type&quot;: &quot;custom&quot;,
                        &quot;tokenizer&quot;: &quot;nori_base_tokenizer&quot;
                    }
                },
                &quot;tokenizer&quot;: {
                    &quot;nori_base_tokenizer&quot;: {
                        &quot;type&quot;: &quot;nori_tokenizer&quot;,
                        &quot;decompound_mode&quot;: &quot;none&quot;,
                        &quot;discard_punctuation&quot;: false
                    }
                }
            }
        }
    },
    &quot;mappings&quot;: {
        &quot;properties&quot;: {
            &quot;system&quot;: {
                &quot;type&quot;: &quot;text&quot;,
                &quot;fields&quot;: {
                    &quot;keyword&quot;: {
                        &quot;type&quot;: &quot;keyword&quot;,
                        &quot;ignore_above&quot;: 512,
                        &quot;doc_values&quot;: false
                    }
                }
            },
            &quot;user&quot;: {
                &quot;type&quot;: &quot;text&quot;,
                &quot;fields&quot;: {
                    &quot;keyword&quot;: {
                        &quot;type&quot;: &quot;keyword&quot;,
                        &quot;ignore_above&quot;: 256
                    },
                    &quot;nori&quot;: {
                        &quot;type&quot;: &quot;text&quot;,
                        &quot;analyzer&quot;: &quot;nori_token_analyzer&quot;
                    }
                }
            },
            &quot;idx&quot;: {
                &quot;type&quot;: &quot;integer&quot;
            }
        }
    }
}&lt;/code&gt;&lt;/pre&gt;
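&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;For reference, the per-term BM25 score that Elasticsearch computes looks roughly like this (a simplified sketch; Lucene's actual implementation differs in details such as IDF smoothing):&lt;/p&gt;

```python
import math

def bm25_score(tf, doc_len, avg_doc_len, n_docs, docs_with_term, k1=1.2, b=0.75):
    # IDF: rarer terms weigh more; TF is saturated and normalized by document length
    idf = math.log(1 + (n_docs - docs_with_term + 0.5) / (docs_with_term + 0.5))
    norm_tf = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm_tf

# A rare term in a short document scores higher than a common term in a long one
print(bm25_score(tf=2, doc_len=8, avg_doc_len=12, n_docs=100_000, docs_with_term=50))
```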
&lt;hr data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;&lt;b&gt;upload.py&lt;/b&gt;&lt;/p&gt;
&lt;pre data-ke-type=&quot;codeblock&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot;&gt;&lt;code&gt;import pandas as pd
from pathlib import Path
import os
import json
from elasticsearch import Elasticsearch, helpers
from config import ELASTIC_HOST

BASE_DIR = Path(__file__).resolve().parent.parent
DATA_PATH = os.path.join(BASE_DIR, 'data', 'base_datasets.xlsx')
MAPPING_PATH = os.path.join(BASE_DIR, 'mapping.json')
MAPPING = json.load(open(MAPPING_PATH))

df = pd.read_excel(DATA_PATH)
df['idx'] = df.index

client = Elasticsearch(hosts=ELASTIC_HOST)

def filterKeys(document, use_these_keys):
    return {key: document[key] for key in use_these_keys}

def doc_generator(df, index):
    df_iter = df.iterrows()
    cols = df.columns.to_list()
    temp = list()

    for idx, document in df_iter:
        temp.append(
            {
                &quot;_index&quot;: index,
                &quot;_id&quot;: f&quot;{idx}&quot;,
                &quot;_source&quot;: filterKeys(document, cols),
            }
        )
    return temp

def upload(df, index):
    client.indices.create(index=index, body=MAPPING)
    data = doc_generator(df, index)
    helpers.bulk(client, data)

upload(df, &quot;chatbot&quot;)
client.close()&lt;/code&gt;&lt;/pre&gt;
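&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;Once the data is indexed, a user utterance can be matched against the user.nori field defined in mapping.json. The query body for a BM25 search looks like this (a sketch with a hypothetical helper; the index and field names follow the code above):&lt;/p&gt;

```python
def build_match_query(text, size=3):
    # BM25 match against the Nori-analyzed subfield of "user"
    return {
        "size": size,
        "query": {"match": {"user.nori": {"query": text}}},
    }

body = build_match_query("요즘 잠을 잘 못 자")
# client.search(index="chatbot", body=body) would return the most similar questions
```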
&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt; &lt;br&gt;&lt;b&gt;Verify the upload&lt;/b&gt;&lt;/p&gt;
&lt;pre data-ke-type=&quot;codeblock&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;curl -XGET http://localhost:9200/_cat/indices\?v&lt;/code&gt;&lt;/pre&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;729&quot; data-origin-height=&quot;46&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/blLLVm/btrUIm8tLbD/KKE2wjt3YJJM7deR88Fk8k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/blLLVm/btrUIm8tLbD/KKE2wjt3YJJM7deR88Fk8k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/blLLVm/btrUIm8tLbD/KKE2wjt3YJJM7deR88Fk8k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FblLLVm%2FbtrUIm8tLbD%2FKKE2wjt3YJJM7deR88Fk8k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;729&quot; height=&quot;46&quot; data-origin-width=&quot;729&quot; data-origin-height=&quot;46&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;

&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt; &lt;br&gt;&lt;b&gt;Fetch a document&lt;/b&gt;&lt;/p&gt;
&lt;pre data-ke-type=&quot;codeblock&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;curl -XGET http://localhost:9200/chatbot/_doc/20 # document with _id 20&lt;/code&gt;&lt;/pre&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;888&quot; data-origin-height=&quot;30&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bxYEpN/btrUHNyt6Jf/KzVDXGYu8Cm4eTL7PEV1Z0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bxYEpN/btrUHNyt6Jf/KzVDXGYu8Cm4eTL7PEV1Z0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bxYEpN/btrUHNyt6Jf/KzVDXGYu8Cm4eTL7PEV1Z0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbxYEpN%2FbtrUHNyt6Jf%2FKzVDXGYu8Cm4eTL7PEV1Z0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;888&quot; height=&quot;30&quot; data-origin-width=&quot;888&quot; data-origin-height=&quot;30&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;

&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt; &lt;/p&gt;</description>
      <category>Machine learning/Chatbot</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/324</guid>
      <comments>https://acdongpgm.tistory.com/324#entry324comment</comments>
      <pubDate>Mon, 26 Dec 2022 21:47:17 +0900</pubDate>
    </item>
    <item>
      <title>[PostgreSQL]. Inserting data with a Python ORM (sqlalchemy)</title>
      <link>https://acdongpgm.tistory.com/323</link>
      <description>&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;create_table.py&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1671770316704&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from sqlalchemy import MetaData, Table
from sqlalchemy import create_engine
import json
import os
from sqlalchemy import Integer, String, DateTime
from sqlalchemy import Column

meta = MetaData()

FILE = os.path.dirname(os.path.realpath(__file__))
CONFIG = json.load(open(FILE + '/pg_test_config.json'))

username = CONFIG['username']
password = CONFIG['password']
host = CONFIG['host']
port = CONFIG['port']
db_name = CONFIG['db_name']

URL = f'postgresql://{username}:{password}@{host}:{port}/{db_name}'

engine = create_engine(URL, client_encoding='utf8')

def create_korean_sns():
    korean_sns = Table(
        'korean_sns', meta,
        Column('index', Integer, primary_key=True),
        Column('session_id', Integer),
        Column('utterance', String(512)),
        Column('participantID', String(10)),
        Column('datetime', DateTime),
        Column('age', Integer),
        Column('gender', Integer),
        Column('topic', Integer),
    )
    meta.create_all(engine)


if __name__ == '__main__':
    create_korean_sns()  # without this call the script defined the table but never created it
    engine.dispose()&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;models.py&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1671770528696&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from sqlalchemy import Column
from sqlalchemy import Integer, String, DateTime
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class KoreanSns(Base):
    __tablename__ = &quot;korean_sns&quot;
    index = Column(Integer, primary_key=True)
    session_id = Column(Integer)
    utterance = Column(String(512))
    participantID = Column(String(10))
    datetime = Column(DateTime)
    age = Column(Integer)
    gender = Column(Integer)
    topic = Column(Integer)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
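With the mapped class above, rows can be inserted through a SQLAlchemy Session. A minimal sketch, using an in-memory SQLite engine purely for illustration (the post itself targets PostgreSQL, and the sample values are made up):

```python
# Sketch: insert one row via the ORM model. In-memory SQLite stands in for
# the PostgreSQL engine used in the post; all field values are hypothetical.
from datetime import datetime

from sqlalchemy import create_engine, Column, Integer, String, DateTime
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class KoreanSns(Base):
    __tablename__ = 'korean_sns'
    index = Column(Integer, primary_key=True)
    session_id = Column(Integer)
    utterance = Column(String(512))
    participantID = Column(String(10))
    datetime = Column(DateTime)
    age = Column(Integer)
    gender = Column(Integer)
    topic = Column(Integer)

engine = create_engine('sqlite://')      # illustration only; swap in the postgresql URL
Base.metadata.create_all(engine)

with Session(engine) as session:
    row = KoreanSns(index=1, session_id=1, utterance='안녕',
                    participantID='P01', datetime=datetime(2022, 5, 5),
                    age=20, gender=0, topic=1)
    session.add(row)
    session.commit()

with Session(engine) as session:
    count = session.query(KoreanSns).count()
print(count)
```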
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;insert.py&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1671770579988&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from sqlalchemy import create_engine

from datetime import datetime
import pandas as pd
import os
import json
from tqdm import tqdm
from pathlib import Path

tqdm.pandas()

BASE_DIR = Path(__file__).resolve().parent.parent
data_path = os.path.join(BASE_DIR, 'result', 'koreanSns_sess_20220505.csv')
config_path = os.path.join(BASE_DIR, 'upload', 'pg_test_config.json')

CONFIG = json.load(open(config_path))

username = CONFIG['username']
password = CONFIG['password']
host = CONFIG['host']
port = CONFIG['port']
db_name = CONFIG['db_name']

URL = f'postgresql://{username}:{password}@{host}:{port}/{db_name}'
TABLE_NAME = 'korean_sns'

if __name__ == &quot;__main__&quot;:
    df = pd.read_csv(data_path,index_col=0)
    df.index = df.index + 1
    engine = create_engine(URL, client_encoding='utf8')

    # print(df.head())

    df['datetime'] = df['datetime'].progress_map(lambda x : datetime.strptime(x[2:], '%y-%m-%d %H:%M:%S'))
    df.to_sql(TABLE_NAME,con=engine,if_exists='replace',chunksize=1000,method='multi')
    engine.dispose()&lt;/code&gt;&lt;/pre&gt;
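The slice `x[2:]` in the datetime conversion drops the century digits so the two-digit `%y` directive can parse the year. A quick stdlib check with a made-up timestamp:

```python
from datetime import datetime

# Hypothetical raw value, shaped like the CSV's datetime column.
raw = '2022-05-05 13:45:00'

# Dropping the first two characters ('20') leaves a two-digit year
# that the '%y' directive can parse.
parsed = datetime.strptime(raw[2:], '%y-%m-%d %H:%M:%S')
print(parsed)
```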
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>RDBMS</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/323</guid>
      <comments>https://acdongpgm.tistory.com/323#entry323comment</comments>
      <pubDate>Fri, 23 Dec 2022 13:43:30 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Modifying SentenceTransformer Tokenization for Multi-turn Input</title>
      <link>https://acdongpgm.tistory.com/322</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1671696004447&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;def origin_tokenize(self, texts: Union[List[str], List[Dict], List[Tuple[str, str]]]):
    &quot;&quot;&quot;
    Tokenizes the texts
    &quot;&quot;&quot;
    return self._first_module().tokenize(texts)

def tokenize(self, texts: Union[List[str], List[Dict], List[Tuple[str, str]]]):
    &quot;&quot;&quot;
    Custom tokenization for multi-turn utterances, e.g.
    안녕 [SEP] 뭐해 ㅋㅋㅋㅋㅋ [SEP] 나 집에서 넷플릭스 보고있지
    &quot;&quot;&quot;
    encoded_dict = self.origin_tokenize(texts)
    idx = []
    input_ids = encoded_dict['input_ids'][0].tolist()
    for i in range(len(input_ids)):
        if input_ids[i] == 3 and len(input_ids) - 1 != i:
            idx.append(i)

    token_type_id = []
    if len(idx) == 2:
        for i in range(len(input_ids)):
            if i &amp;lt;= idx[0]:
                token_type_id.append(0)
            elif i &amp;lt;= idx[1]:
                token_type_id.append(1)
            else:
                token_type_id.append(0)

        encoded_dict['token_type_ids'] = torch.unsqueeze(
            torch.tensor(token_type_id), 0)
    return encoded_dict&lt;/code&gt;&lt;/pre&gt;
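The loop above locates the two non-final `[SEP]` tokens (id 3 in the KLUE vocabulary) and assigns segment ids around them. The same assignment logic in isolation, with hypothetical token ids:

```python
# Hypothetical token ids: 2 = [CLS], 3 = [SEP], other values are word pieces.
input_ids = [2, 11, 12, 3, 13, 14, 3, 15, 16, 3]

# Positions of [SEP] tokens, excluding the final sequence-closing one.
idx = [i for i, t in enumerate(input_ids) if t == 3 and i != len(input_ids) - 1]

# Segment 0 up to the first [SEP], segment 1 up to the second, 0 afterwards.
first, second = idx
token_type_ids = ([0] * (first + 1)
                  + [1] * (second - first)
                  + [0] * (len(input_ids) - second - 1))
print(token_type_ids)
```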
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1671701084962&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import torch

# assumes a HuggingFace tokenizer has already been loaded
def custom_tokenizer(sent, MAX_LEN):
    encoded_dict = tokenizer.encode_plus(
      text = sent,
      add_special_tokens = True, # adds [CLS] at the start and [SEP] at the end
      max_length = MAX_LEN,
      padding = 'max_length', # pad_to_max_length is deprecated
      return_attention_mask = True,
      truncation = True,
      return_tensors = 'pt' # needed so input_ids[0] is a tensor with .tolist()
    )
    
    idx = []
    input_ids = encoded_dict['input_ids'][0].tolist()
    for i in range(len(input_ids)):
        if input_ids[i] == 3 and len(input_ids) - 1 != i:
            idx.append(i)

    token_type_id = []
    if len(idx) == 2:
        for i in range(len(input_ids)):
            if i &amp;lt;= idx[0]:
                token_type_id.append(0)
            elif i &amp;lt;= idx[1]:
                token_type_id.append(1)
            else:
                token_type_id.append(0)

        encoded_dict['token_type_ids'] = torch.unsqueeze(
            torch.tensor(token_type_id), 0)
    return encoded_dict&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Machine learning/NLP</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/322</guid>
      <comments>https://acdongpgm.tistory.com/322#entry322comment</comments>
      <pubDate>Thu, 22 Dec 2022 17:40:19 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Cleaning Text Data (Removing Emoji, Special Characters, URLs, and Hanja)</title>
      <link>https://acdongpgm.tistory.com/321</link>
      <description>&lt;pre class=&quot;python&quot;&gt;&lt;code&gt;import re
import emoji
from soynlp.normalizer import repeat_normalize

pattern = re.compile(f'[^ .,?!/@$%~％&amp;middot;&amp;sim;()\x00-\x7Fㄱ-ㅣ가-힣]+')
url_pattern = re.compile(
    r'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&amp;amp;//=]*)')

def clean(x): 
    x = pattern.sub(' ', x)
    x = emoji.replace_emoji(x, replace='') # remove emoji
    x = url_pattern.sub('', x)
    x = x.strip()
    x = repeat_normalize(x, num_repeats=2)
    return x&lt;/code&gt;&lt;/pre&gt;
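`repeat_normalize` from soynlp collapses long character runs such as 'ㅋㅋㅋㅋㅋ' down to `num_repeats` occurrences. A rough stdlib approximation of the single-character case (soynlp also handles merged-jamo patterns like '재밌엌ㅋㅋ', which this sketch does not):

```python
import re

def collapse_repeats(text, num_repeats=2):
    # Collapse any run of 3 or more identical characters down to
    # num_repeats copies, e.g. 'ㅋㅋㅋㅋㅋ' becomes 'ㅋㅋ'.
    pattern = re.compile(r'(.)\1{' + str(num_repeats) + r',}')
    return pattern.sub(lambda m: m.group(1) * num_repeats, text)

print(collapse_repeats('ㅋㅋㅋㅋㅋ 진짜'))
```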
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference:&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/Beomi/KcELECTRA&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/Beomi/KcELECTRA&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Machine learning/NLP</category>
      <category>html 제거</category>
      <category>url 패턴 제거</category>
      <category>이모지</category>
      <category>이모지 제거</category>
      <category>특수문자 제거</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/321</guid>
      <comments>https://acdongpgm.tistory.com/321#entry321comment</comments>
      <pubDate>Wed, 21 Dec 2022 10:18:16 +0900</pubDate>
    </item>
    <item>
      <title>[Review]. Docker Container Tissue Case (feat. Incatos)</title>
      <link>https://acdongpgm.tistory.com/320</link>
      <description>&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;도커(Docker)는 컨테이너 관리 시스템이다.&lt;/b&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;There is hardly a developer who does not know Docker.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;I work with it so often that&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;at some point the word just started rolling off my tongue..&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Maybe because I type the commands so much lol&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;The character and logo&amp;nbsp;&lt;/b&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;grew so familiar that I started to feel real affection. (The beginning of love..)&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;784&quot; data-origin-height=&quot;388&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ccn2tv/btrTYBNNd0H/va7T6Z4RX2mkNPpCZ9EkoK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ccn2tv/btrTYBNNd0H/va7T6Z4RX2mkNPpCZ9EkoK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ccn2tv/btrTYBNNd0H/va7T6Z4RX2mkNPpCZ9EkoK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fccn2tv%2FbtrTYBNNd0H%2Fva7T6Z4RX2mkNPpCZ9EkoK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;540&quot; height=&quot;267&quot; data-origin-width=&quot;784&quot; data-origin-height=&quot;388&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size14&quot;&gt;&lt;i&gt;I even used Docker as my nickname at one point.&lt;/i&gt;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Just as I was falling for Docker like this,&lt;/b&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;I discovered &lt;span style=&quot;color: #006dd7;&quot;&gt;Docker goods&lt;/span&gt; I could use in the office.&lt;/b&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;KakaoTalk_Photo_2022-12-19-22-35-08.jpeg&quot; data-origin-width=&quot;1440&quot; data-origin-height=&quot;810&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/nP0BT/btrT4ejVW0u/KCqgSXrtZYBDTnwp5rpHq0/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/nP0BT/btrT4ejVW0u/KCqgSXrtZYBDTnwp5rpHq0/img.jpg&quot; data-alt=&quot;A Docker container that fits right in next to the plant&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/nP0BT/btrT4ejVW0u/KCqgSXrtZYBDTnwp5rpHq0/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FnP0BT%2FbtrT4ejVW0u%2FKCqgSXrtZYBDTnwp5rpHq0%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;599&quot; height=&quot;337&quot; data-filename=&quot;KakaoTalk_Photo_2022-12-19-22-35-08.jpeg&quot; data-origin-width=&quot;1440&quot; data-origin-height=&quot;810&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;A Docker container that fits right in next to the plant&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Will having this make me better at Docker?&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;KakaoTalk_Photo_2022-12-19-22-34-50.jpeg&quot; data-origin-width=&quot;1440&quot; data-origin-height=&quot;810&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bELYeo/btrT4qR4y0X/lhPv9eWnHtJWoq8koqYMdK/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bELYeo/btrT4qR4y0X/lhPv9eWnHtJWoq8koqYMdK/img.jpg&quot; data-alt=&quot;The Docker container looking dignified next to the deep learning PC&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bELYeo/btrT4qR4y0X/lhPv9eWnHtJWoq8koqYMdK/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbELYeo%2FbtrT4qR4y0X%2FlhPv9eWnHtJWoq8koqYMdK%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;643&quot; height=&quot;362&quot; data-filename=&quot;KakaoTalk_Photo_2022-12-19-22-34-50.jpeg&quot; data-origin-width=&quot;1440&quot; data-origin-height=&quot;810&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;The Docker container looking dignified next to the deep learning PC&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;We use a shared office, so workers from other offices&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;can see it as they walk by,&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;and it feels kind of satisfying, like I will look like a dev master.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;I am actually a junior..&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Turning the Docker container into a tissue case is a really clever idea.&lt;/b&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #006dd7;&quot;&gt;&lt;b&gt;I have become a successful fanboy lol&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;I recommend the white color.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;It comes as a kit, and it is easy to assemble and take apart.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;If you give one to a developer friend,&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;they will be very happy with it, just like me lol&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Purchase link&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://incatos.shop/surl/P/11&quot;&gt;https://incatos.shop/surl/P/11&lt;/a&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>MLops/Container</category>
      <category>container model customize docker</category>
      <category>docker container</category>
      <category>docker container model customize</category>
      <category>docker container tissue box</category>
      <category>docker tissue box</category>
      <category>docker tissue case</category>
      <category>도커각티슈케이스</category>
      <category>도커컨테이너</category>
      <category>도커티슈박스</category>
      <category>도커휴지케이스</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/320</guid>
      <comments>https://acdongpgm.tistory.com/320#entry320comment</comments>
      <pubDate>Mon, 19 Dec 2022 22:42:06 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Loading a SentenceTransformer Model with TensorFlow</title>
      <link>https://acdongpgm.tistory.com/318</link>
      <description>&lt;blockquote data-ke-style=&quot;style3&quot;&gt;참고 : &lt;a href=&quot;https://www.philschmid.de/tensorflow-sentence-transformers&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.philschmid.de/tensorflow-sentence-transformers&lt;/a&gt;&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;The HuggingFace repository has no h5 file, the TensorFlow model format&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1472&quot; data-origin-height=&quot;806&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bdFjKn/btrTt1Y4UN7/s7tlYSXPDLQLVwWYxfAWck/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bdFjKn/btrTt1Y4UN7/s7tlYSXPDLQLVwWYxfAWck/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bdFjKn/btrTt1Y4UN7/s7tlYSXPDLQLVwWYxfAWck/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbdFjKn%2FbtrTt1Y4UN7%2Fs7tlYSXPDLQLVwWYxfAWck%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;557&quot; height=&quot;305&quot; data-origin-width=&quot;1472&quot; data-origin-height=&quot;806&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Even without an h5 file, the model can still be loaded as a TensorFlow model.&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Class implementation&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1670855216941&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import tensorflow as tf
from typing import Union , List
from transformers import TFAutoModel
from transformers import AutoTokenizer

class TFSentenceTransformer(tf.keras.layers.Layer):
    def __init__(self, model_name_or_path):
        super(TFSentenceTransformer, self).__init__()
        # loads transformers model
        self.model = TFAutoModel.from_pretrained(model_name_or_path, from_pt=True)
        self.tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)

    def call(self, inputs, normalize=False):
        # runs model on inputs
        model_output = self.model(inputs)
        # Perform pooling. In this case, mean pooling.
        embeddings = self.mean_pooling(model_output, inputs[&quot;attention_mask&quot;])
        # normalizes the embeddings if wanted
        if normalize:
          embeddings = self.normalize(embeddings)
        return embeddings
    
    def encode(self, sentence : Union[str,List], normalize=False):
        inputs = self.tokenizer(sentence, padding=True, truncation=True, return_tensors='tf')
        features = self.call(inputs, normalize=normalize)
        
        if type(sentence) == str:
            return features[0]
        
        return features

    def mean_pooling(self, model_output, attention_mask):
        token_embeddings = model_output[0] # First element of model_output contains all token embeddings
        input_mask_expanded = tf.cast(
            tf.broadcast_to(tf.expand_dims(attention_mask, -1), tf.shape(token_embeddings)),
            tf.float32
        )
        return tf.math.reduce_sum(token_embeddings * input_mask_expanded, axis=1) / tf.clip_by_value(tf.math.reduce_sum(input_mask_expanded, axis=1), 1e-9, tf.float32.max)
    
    def normalize(self, embeddings):
      embeddings, _ = tf.linalg.normalize(embeddings, 2, axis=1)
      return embeddings&lt;/code&gt;&lt;/pre&gt;
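The mean_pooling method averages token embeddings only over positions where the attention mask is 1, with the clip guarding against division by zero. The arithmetic in isolation, with toy numbers:

```python
# Toy masked mean pooling: two real tokens and one padding token.
token_embeddings = [[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]  # last row is padding
attention_mask = [1, 1, 0]

dim = len(token_embeddings[0])
summed = [0.0] * dim
for vec, mask in zip(token_embeddings, attention_mask):
    for j in range(dim):
        summed[j] += vec[j] * mask  # padding rows contribute nothing

# Mirrors the clip_by_value lower bound: never divide by zero.
denom = max(sum(attention_mask), 1e-9)
pooled = [s / denom for s in summed]
print(pooled)
```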
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Embedding&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1670855293355&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;model_id = 'j5ng/sentence-klue-roberta-base'

model = TFSentenceTransformer(model_id)

print(model.encode(&quot;안녕하세요&quot;)[:5])&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;tf.Tensor([-0.2527147 -0.05963629 0.16842306 0.15223232 0.5281706], shape=(5,), dtype=float32)&lt;/p&gt;</description>
      <category>Machine learning/NLP</category>
      <category>Embedding</category>
      <category>huggingface</category>
      <category>sbert</category>
      <category>SentenceTransformer</category>
      <category>TensorFlow</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/318</guid>
      <comments>https://acdongpgm.tistory.com/318#entry318comment</comments>
      <pubDate>Mon, 12 Dec 2022 23:32:33 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Converting a Sentence-Transformer Model to ONNX Format</title>
      <link>https://acdongpgm.tistory.com/317</link>
      <description>&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Loading a model registered on HuggingFace and saving it in the ONNX file format&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1670854460886&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from pathlib import Path

from transformers.convert_graph_to_onnx import convert
convert(framework=&quot;pt&quot;, model=&quot;j5ng/sentence-klue-roberta-base&quot;, output=Path(&quot;onnx_models/trfs-model.onnx&quot;), opset=11)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Logs&lt;/b&gt;&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 100%;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 100%;&quot;&gt;`transformers.convert_graph_to_onnx`&amp;nbsp;package&amp;nbsp;is&amp;nbsp;deprecated&amp;nbsp;and&amp;nbsp;will&amp;nbsp;be&amp;nbsp;removed&amp;nbsp;in&amp;nbsp;version&amp;nbsp;5&amp;nbsp;of&amp;nbsp;Transformers&lt;br /&gt;&amp;nbsp;&amp;nbsp;warnings.warn(&lt;br /&gt;ONNX&amp;nbsp;opset&amp;nbsp;version&amp;nbsp;set&amp;nbsp;to:&amp;nbsp;11&lt;br /&gt;Loading&amp;nbsp;pipeline&amp;nbsp;(model:&amp;nbsp;j5ng/sentence-klue-roberta-base,&amp;nbsp;tokenizer:&amp;nbsp;j5ng/sentence-klue-roberta-base)&lt;br /&gt;Creating&amp;nbsp;folder&amp;nbsp;onnx_models&lt;br /&gt;Using&amp;nbsp;framework&amp;nbsp;PyTorch:&amp;nbsp;1.13.0&lt;br /&gt;Found&amp;nbsp;input&amp;nbsp;input_ids&amp;nbsp;with&amp;nbsp;shape:&amp;nbsp;{0:&amp;nbsp;'batch',&amp;nbsp;1:&amp;nbsp;'sequence'}&lt;br /&gt;Found&amp;nbsp;input&amp;nbsp;token_type_ids&amp;nbsp;with&amp;nbsp;shape:&amp;nbsp;{0:&amp;nbsp;'batch',&amp;nbsp;1:&amp;nbsp;'sequence'}&lt;br /&gt;Found&amp;nbsp;input&amp;nbsp;attention_mask&amp;nbsp;with&amp;nbsp;shape:&amp;nbsp;{0:&amp;nbsp;'batch',&amp;nbsp;1:&amp;nbsp;'sequence'}&lt;br /&gt;Found&amp;nbsp;output&amp;nbsp;output0&amp;nbsp;with&amp;nbsp;shape:&amp;nbsp;{0:&amp;nbsp;'batch',&amp;nbsp;1:&amp;nbsp;'sequence'}&lt;br /&gt;Found&amp;nbsp;output&amp;nbsp;output1&amp;nbsp;with&amp;nbsp;shape:&amp;nbsp;{0:&amp;nbsp;'batch'}&lt;br /&gt;Ensuring&amp;nbsp;inputs&amp;nbsp;are&amp;nbsp;in&amp;nbsp;correct&amp;nbsp;order&lt;br /&gt;position_ids&amp;nbsp;is&amp;nbsp;not&amp;nbsp;present&amp;nbsp;in&amp;nbsp;the&amp;nbsp;generated&amp;nbsp;input&amp;nbsp;list.&lt;br /&gt;Generated&amp;nbsp;inputs&amp;nbsp;order:&amp;nbsp;['input_ids',&amp;nbsp;'attention_mask',&amp;nbsp;'token_type_ids']&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;338&quot; data-origin-height=&quot;98&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/zPjKp/btrTulJQ4zE/ImnChUZHC7hMA66R1CVKfk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/zPjKp/btrTulJQ4zE/ImnChUZHC7hMA66R1CVKfk/img.png&quot; data-alt=&quot;모델 생성&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/zPjKp/btrTulJQ4zE/ImnChUZHC7hMA66R1CVKfk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FzPjKp%2FbtrTulJQ4zE%2FImnChUZHC7hMA66R1CVKfk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;338&quot; height=&quot;98&quot; data-origin-width=&quot;338&quot; data-origin-height=&quot;98&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;모델 생성&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Caveats:&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- The output argument must be passed as a Path object. (This seems to have changed in a version update.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- The output folder &quot;onnx_models&quot; must be empty. (An error is raised if the folder contains even a single file.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Loading the ONNX model and computing embeddings&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1670854920923&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from onnxruntime import InferenceSession
import torch

sess = InferenceSession(&quot;onnx_models/trfs-model.onnx&quot;, providers=[&quot;CPUExecutionProvider&quot;])

def mean_pooling(model_output, attention_mask):
    model_output = torch.from_numpy(model_output[0])
    token_embeddings = model_output #First element of model_output contains all token embeddings
    attention_mask = torch.from_numpy(attention_mask)
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size())
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask, input_mask_expanded, sum_mask

from transformers import AutoTokenizer
# Use the same tokenizer as the original SentenceTransformer model
tokenizer = AutoTokenizer.from_pretrained(&quot;j5ng/sentence-klue-roberta-base&quot;)

query = &quot;안녕하세요&quot;

model_inputs = tokenizer(query, return_tensors=&quot;pt&quot;)
inputs_onnx = {k: v.cpu().detach().numpy() for k, v in model_inputs.items()}

sequence = sess.run(None, inputs_onnx)

sentence_embeddings = mean_pooling(sequence, inputs_onnx['attention_mask'])

print(sentence_embeddings[0][0][:5])&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;tensor([-0.2527, -0.0596, 0.1684, 0.1522, 0.5282]), which matches the values embedded with SentenceTransformer.&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;* Additional note&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;onnx to keras model&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1670854779466&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import onnx
from onnx2keras import onnx_to_keras

# Load ONNX model
onnx_model = onnx.load('onnx_models/trfs-model.onnx')

input_all = [node.name for node in onnx_model.graph.input]
print(input_all)

# Call the converter (input - is the main model input name, can be different for your model)
k_model = onnx_to_keras(onnx_model, input_all)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;ValueError:&amp;nbsp;'/Unsqueeze_output_0/'&amp;nbsp;is&amp;nbsp;not&amp;nbsp;a&amp;nbsp;valid&amp;nbsp;root&amp;nbsp;scope&amp;nbsp;name.&amp;nbsp;A&amp;nbsp;root&amp;nbsp;scope&amp;nbsp;name&amp;nbsp;has&amp;nbsp;to&amp;nbsp;match&amp;nbsp;the&amp;nbsp;following&amp;nbsp;pattern:&amp;nbsp;^[A-Za-z0-9.][A-Za-z0-9_.\\/&amp;gt;-]*$&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;The error comes from TensorFlow's scope-naming policy.. ugh... PyTorch and TF, please get along with each other.&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;</description>
      <category>Machine learning/NLP</category>
      <category>onnx</category>
      <category>sbert</category>
      <category>sentenceTransformer to onnx</category>
      <category>TensorFlow</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/317</guid>
      <comments>https://acdongpgm.tistory.com/317#entry317comment</comments>
      <pubDate>Mon, 12 Dec 2022 23:23:48 +0900</pubDate>
    </item>
    <item>
      <title>[Docker]. Installing Docker and docker-compose on Amazon Linux</title>
      <link>https://acdongpgm.tistory.com/316</link>
      <description>&lt;h3 id=&quot;docker-설치&quot; data-ke-size=&quot;size23&quot;&gt;&lt;b&gt;Docker 설치&lt;/b&gt;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;yum 으로 Docker 설치&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1670338212933&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo yum install docker -y&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Start Docker&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1670338269965&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo service docker start&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Add ec2-user to the docker group (so Docker can be controlled right after connecting to the instance, without sudo)&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1670338305039&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo usermod -aG docker ec2-user&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Register Docker to start on boot&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1670338337197&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo chkconfig docker on&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Change permissions on the Docker socket&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1670338364129&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo chmod 666 /var/run/docker.sock&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Test that it works (optional)&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1670338402097&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;docker run hello-world&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;b&gt;Installing Docker Compose (docker-compose)&lt;/b&gt;&lt;/h3&gt;
&lt;pre id=&quot;code_1670338426204&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Make the binary executable&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1670338444795&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo chmod +x /usr/local/bin/docker-compose&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Check the version&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1670338461027&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;docker-compose version&lt;/code&gt;&lt;/pre&gt;</description>
      <category>MLops/AWS</category>
      <category>Docker</category>
      <category>docker compose</category>
      <category>docker compose install</category>
      <category>docker install</category>
      <category>도커 설치</category>
      <category>도커 컴포즈 설치</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/316</guid>
      <comments>https://acdongpgm.tistory.com/316#entry316comment</comments>
      <pubDate>Tue, 6 Dec 2022 23:50:16 +0900</pubDate>
    </item>
    <item>
      <title>[ElasticSearch]. Attending an Elasticsearch Meetup</title>
      <link>https://acdongpgm.tistory.com/315</link>
      <description>&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;720&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/x3uMR/btrS1deeyk7/9fi4JKPGh7KOFlift89Sb1/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/x3uMR/btrS1deeyk7/9fi4JKPGh7KOFlift89Sb1/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/x3uMR/btrS1deeyk7/9fi4JKPGh7KOFlift89Sb1/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fx3uMR%2FbtrS1deeyk7%2F9fi4JKPGh7KOFlift89Sb1%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;611&quot; height=&quot;344&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;720&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;A few days ago I attended a meetup hosted by the Korean Elasticsearch user group.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;It was my first meetup, so I was curious how it would go,&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;and since I had taught myself Elasticsearch, I wanted to meet others in the same boat.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;The event featured sessions where companies making good use of Elasticsearch presented&lt;br /&gt;on topics of their choosing; none were about NLP, but they were interesting all the same.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Things I had only resolved to improve in my own usage and then let slide,&lt;br /&gt;the presenters had dug into tenaciously and figured out.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style1&quot;&gt;&lt;span style=&quot;font-family: 'Noto Serif KR';&quot;&gt;Elasticsearch queries get harder to read as they grow more complex.&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: 'Noto Serif KR';&quot;&gt;Couldn't they be used like an ORM in an RDBMS?&lt;/span&gt;&lt;/blockquote&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Something I had only pondered and then shelved because I was busy, I picked up here with no effort.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Had I tried to work it out myself, it would have taken at least two days.&lt;br /&gt;What a meetup gives you is not just knowledge; it also spares you countless trial-and-error cycles.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;I also plan to note down and review the terms I didn't understand:&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;ingest pipeline, nested, automatic discovery, and so on.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;They said it will be held every two months, so barring anything unusual I'd like to show my face&lt;br /&gt;and talk with more people... (a shame I didn't get to this time).&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;And if I get the chance, I'd like to present on NLP myself.&lt;br /&gt;Presenting is how it truly becomes mine!&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;The end.&lt;/p&gt;</description>
      <category>API/ElasticSearch</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/315</guid>
      <comments>https://acdongpgm.tistory.com/315#entry315comment</comments>
      <pubDate>Tue, 6 Dec 2022 23:47:52 +0900</pubDate>
    </item>
    <item>
      <title>[위로봇 Project]. 오복이 Diary, Day 2 - Design</title>
      <link>https://acdongpgm.tistory.com/314</link>
      <description>&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;https://acdongpgm.tistory.com/313&quot; target=&quot;_blank&quot;&gt;&lt;span&gt;2022.12.06 - [Chatbot] - [위로봇 프로젝트]. 오복이 육아일기 1일차 - 소개&lt;/span&gt;&lt;/a&gt;&lt;br&gt; &lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot; style=&quot;text-align: left;&quot;&gt;&lt;b&gt;The 위로봇 오복이 pipeline has a very simple design.&lt;/b&gt;&lt;br&gt;&lt;b&gt;The web server uses FastAPI, built on Python.&lt;/b&gt;&lt;br&gt;&lt;b&gt;The Elasticsearch search engine lives on the same instance as the web server.&lt;/b&gt;&lt;/p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1804&quot; data-origin-height=&quot;924&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Qe8Pq/btrSZ3wJar4/I3nFktOig27wkDJ1SrtJq0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Qe8Pq/btrSZ3wJar4/I3nFktOig27wkDJ1SrtJq0/img.png&quot; data-alt=&quot; 오복이 설계도 &quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Qe8Pq/btrSZ3wJar4/I3nFktOig27wkDJ1SrtJq0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FQe8Pq%2FbtrSZ3wJar4%2FI3nFktOig27wkDJ1SrtJq0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;770&quot; height=&quot;394&quot; data-origin-width=&quot;1804&quot; data-origin-height=&quot;924&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt; 오복이 architecture diagram &lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt; &lt;/p&gt;
&lt;hr data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot;&gt;
&lt;h2 style=&quot;text-align: left;&quot; data-ke-size=&quot;size26&quot;&gt;Data Flow Scenario&lt;/h2&gt;
&lt;h3 style=&quot;text-align: left;&quot; data-ke-size=&quot;size23&quot;&gt;1. KakaoTalk channel user chat&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;When a user on the KakaoTalk channel sends a chat message,&lt;br&gt;the chat data is sent as a request to a GCP server we run ourselves.&lt;br&gt; &lt;/p&gt;
&lt;h3 style=&quot;text-align: left;&quot; data-ke-size=&quot;size23&quot;&gt;2. FastAPI Web Server &lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;The FastAPI web server receives the request,&lt;br&gt;parses out just the question text, and sends the question to Elasticsearch as a query.&lt;br&gt; &lt;/p&gt;
&lt;h3 style=&quot;text-align: left;&quot; data-ke-size=&quot;size23&quot;&gt;3. ElasticSearch&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;The Elasticsearch engine searches for the stored question most similar&lt;br&gt;to the user's text and returns that question's answer to the web server.&lt;br&gt; &lt;/p&gt;
&lt;h3 style=&quot;text-align: left;&quot; data-ke-size=&quot;size23&quot;&gt;4. Reply to the KakaoTalk channel user&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;The web server formats the search engine's answer into the KakaoTalk template&lt;br&gt;and sends the response back to the Kakao channel.&lt;br&gt; &lt;/p&gt;
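Step 4 above can be sketched as a small helper. The payload shape follows the Kakao open builder skill response format as I understand it (version 2.0, simpleText output); `build_kakao_reply` is a hypothetical name, not from the actual service code:

```python
def build_kakao_reply(answer_text):
    # Wrap the search engine's answer in the KakaoTalk open builder
    # skill response template (simpleText is the plain-text output type).
    return {
        "version": "2.0",
        "template": {
            "outputs": [
                {"simpleText": {"text": answer_text}}
            ]
        },
    }
```

The FastAPI endpoint would return this dict as JSON back to the Kakao channel.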
&lt;hr data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot; style=&quot;text-align: left;&quot;&gt;For a safer service, Elasticsearch and the web server should run on separate machines,&lt;br&gt;but for cost reasons everything was packed onto a single server.&lt;br&gt;Next time I'll post about building the web server and installing Elasticsearch.&lt;/p&gt;</description>
      <category>Machine learning/Chatbot</category>
      <category>챗봇 구현</category>
      <category>챗봇 설계</category>
      <category>카카오톡 챗봇 구현</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/314</guid>
      <comments>https://acdongpgm.tistory.com/314#entry314comment</comments>
      <pubDate>Tue, 6 Dec 2022 23:42:38 +0900</pubDate>
    </item>
    <item>
      <title>[위로봇 Project]. 오복이 Diary, Day 1 - Introduction</title>
      <link>https://acdongpgm.tistory.com/313</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;안녕하세요 오늘부터 토이 프로젝트로&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;카카오톡 채널을 이용한 챗봇을 만들어보기로 했습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;http://pf.kakao.com/_BNZRb&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;http://pf.kakao.com/_BNZRb&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;620&quot; data-origin-height=&quot;348&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bPhDwh/btrS08qmRrL/h3t2CcerH6KePLtgthIF7K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bPhDwh/btrS08qmRrL/h3t2CcerH6KePLtgthIF7K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bPhDwh/btrS08qmRrL/h3t2CcerH6KePLtgthIF7K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbPhDwh%2FbtrS08qmRrL%2Fh3t2CcerH6KePLtgthIF7K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;363&quot; height=&quot;204&quot; data-origin-width=&quot;620&quot; data-origin-height=&quot;348&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I have actually already built it once, but developing without taking notes along the way left things disorganized,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;so I'm rebuilding it from scratch while documenting each step.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I hope this serves as a reference for anyone building a chatbot with the KakaoTalk open builder.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;b&gt;1. Character Introduction&lt;/b&gt;&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;DALL&amp;amp;amp;amp;amp;amp;middot;E 2022-11-18 17.17.38 - A very cute counselor seal animation character.png&quot; data-origin-width=&quot;1024&quot; data-origin-height=&quot;1024&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/m72Aa/btrS09bIyIS/vdJOCWHkdeqUjzZP7hPRFK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/m72Aa/btrS09bIyIS/vdJOCWHkdeqUjzZP7hPRFK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/m72Aa/btrS09bIyIS/vdJOCWHkdeqUjzZP7hPRFK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fm72Aa%2FbtrS09bIyIS%2FvdJOCWHkdeqUjzZP7hPRFK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;253&quot; height=&quot;253&quot; data-filename=&quot;DALL&amp;amp;amp;amp;amp;middot;E 2022-11-18 17.17.38 - A very cute counselor seal animation character.png&quot; data-origin-width=&quot;1024&quot; data-origin-height=&quot;1024&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Name: 오복이&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Occupation: counselor&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Species?: seal&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&quot;위로봇 오복이&quot; is not an AI assistant.&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;오복이 is an empathetic-conversation chatbot that answers your questions with words of comfort.&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;오복이 does not answer from fixed scenarios. (It answers through a self-built API.)&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;오복이 uses a search system to find the stored question most similar to yours and replies with its matching answer.&lt;/b&gt;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;TMI&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The character image above was generated by OpenAI's DALL-E.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I think the prompt was something like &quot;a kind, smart counselor seal as an animated character,&quot; haha.&lt;/p&gt;
&lt;p data-ke-size=&quot;size14&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;+ I considered using existing images, but generated one instead to avoid any copyright issues.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;b&gt;2. Data Used&lt;/b&gt;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1. 송영숙's chatbot data&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span style=&quot;background-color: #ffffff; color: #1c3aa9;&quot;&gt;&lt;/span&gt;&lt;a href=&quot;https://github.com/songys/Chatbot_data&quot;&gt;https://github.com/songys/Chatbot_data&lt;/a&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2. AI-HUB wellness counseling data&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span style=&quot;background-color: #ffffff; color: #1c3aa9;&quot;&gt;&lt;/span&gt;&lt;a href=&quot;https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-006&quot;&gt;https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-006&lt;/a&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3. AI-HUB empathetic dialogue corpus&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&amp;amp;topMenu=100&amp;amp;aihubDataSe=realm&amp;amp;dataSetSn=86&quot;&gt;https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&amp;amp;topMenu=100&amp;amp;aihubDataSe=realm&amp;amp;dataSetSn=86&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The three datasets above were combined.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Pairs whose answer ends with a question were removed,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;*because the bot only handles single-turn conversations.&lt;/p&gt;
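The cleaning rule above can be sketched as a simple filter; the sample pairs below are made up for illustration:

```python
def keep_pair(question, answer):
    # Drop pairs whose answer ends with a question mark: the bot is
    # single-turn, so it cannot handle the follow-up such an answer invites.
    return not answer.strip().endswith("?")

pairs = [
    ("요즘 너무 우울해", "무슨 일 있으셨어요?"),
    ("요즘 너무 우울해", "토닥토닥, 힘내세요."),
]
filtered = [p for p in pairs if keep_pair(*p)]
```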
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;b&gt;3. Prerequisites&lt;/b&gt;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1. A server&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- Deployment needs a server; for now I used the free 3-month credit on Google Cloud Platform (GCP).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- Spec: VM instance e2-medium&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; CPU: 2 cores, memory: 4 GB&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2. KakaoTalk channel &amp;amp; open builder&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- After signing up and creating a channel, there was an approval period, as I recall.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- I'll skip the sign-up and channel-creation steps.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In closing...&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The KakaoTalk channel chatbot service used to be paid, but it has since been made free.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I built this mainly because, as a chatbot developer, I wanted to deploy a simple chatbot of my own,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;and above all because I can collect the data users generate while talking to it.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;My goal is to keep improving the bot's features and grow together with the channel.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Thank you.&lt;/p&gt;</description>
      <category>Machine learning/Chatbot</category>
      <category>나만의 챗봇만들기</category>
      <category>오복이</category>
      <category>위로봇 오복이</category>
      <category>챗봇 만들기</category>
      <category>카카오 오픈빌더</category>
      <category>카카오톡 채널</category>
      <category>카카오톡 챗봇</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/313</guid>
      <comments>https://acdongpgm.tistory.com/313#entry313comment</comments>
      <pubDate>Tue, 6 Dec 2022 20:34:56 +0900</pubDate>
    </item>
    <item>
      <title>[ElasticSearch]. Using the Korean morphological analyzer nori_analyzer</title>
      <link>https://acdongpgm.tistory.com/312</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;엘라스틱 서치에서 사용하는 토크나이저를 파이썬 클라이언트로 연동해서 사용할 수 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;굳이 엘라스틱서치에서 작동되는걸 파이썬으로 가져와서 연동해야할 필요가 있을까 하지만&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;검색 결과로 나오는 BM25 score 말고 분리된 형태소 간 유사도를 파악하기 위해 사용했다. &lt;span style=&quot;color: #ee2323;&quot;&gt;*(0~1)로 치환되는 값이 필요함.&lt;/span&gt;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Mapping 정보&lt;/h4&gt;
&lt;pre id=&quot;code_1670240464521&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;&quot;analysis&quot;: {
    &quot;analyzer&quot;: {
        &quot;nori_token_analyzer&quot;: {
            &quot;type&quot;: &quot;custom&quot;,
            &quot;tokenizer&quot;: &quot;nori_base_tokenizer&quot;
        }
    },
    &quot;tokenizer&quot;: {
        &quot;nori_base_tokenizer&quot;: {
            &quot;type&quot;: &quot;nori_tokenizer&quot;,
            &quot;decompound_mode&quot;: &quot;mixed&quot;,
            &quot;discard_punctuation&quot;: false
        }
    }
}&lt;/code&gt;&lt;/pre&gt;
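These analysis settings belong in the index settings at creation time. A sketch of the same structure as a Python dict (the `indices.create` call in the comment is an assumption; its exact signature varies across elasticsearch-py versions):

```python
# Analysis settings from the mapping above, ready to pass when creating
# the index, e.g. (sketch, not tested against a live cluster):
#   elastic.client.indices.create(index=INDEX_NAME, settings=NORI_SETTINGS)
NORI_SETTINGS = {
    "analysis": {
        "analyzer": {
            "nori_token_analyzer": {
                "type": "custom",
                "tokenizer": "nori_base_tokenizer",
            }
        },
        "tokenizer": {
            "nori_base_tokenizer": {
                "type": "nori_tokenizer",
                "decompound_mode": "mixed",
                "discard_punctuation": False,
            }
        },
    }
}
```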
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Connecting the Python client&lt;/h4&gt;
&lt;pre id=&quot;code_1670240542245&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from elasticsearch import Elasticsearch
from config import ELASTIC_HOST

class ElasticSearch:
    def __init__(self):
        self.client = None

    def connect(self):
        self.client = Elasticsearch(hosts=ELASTIC_HOST)

    def close(self):
        self.client.close()


elastic = ElasticSearch()&lt;/code&gt;&lt;/pre&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Declaring the analyzer function&lt;/h4&gt;
&lt;pre id=&quot;code_1670240414638&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;def analyzer(question):
    if elastic.client is None:
        elastic.connect()
    res = elastic.client.indices.analyze(
        index=INDEX_NAME,
        analyzer=&quot;nori_token_analyzer&quot;,
        text=question,
        attributes=[&quot;leftPOS&quot;],
        explain=True,
    )
    pos_tag = [
        (i[&quot;token&quot;], i[&quot;leftPOS&quot;])
        for i in res[&quot;detail&quot;][&quot;tokenizer&quot;][&quot;tokens&quot;]
        if i[&quot;leftPOS&quot;] != &quot;SP(Space)&quot;
    ]
    return pos_tag&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Testing it out&lt;/p&gt;
&lt;pre id=&quot;code_1670240662942&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;print(analyzer(&quot;오늘 저녁 뭐먹을까 추천해줘~~&quot;))&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;[('오늘', 'MAG(General Adverb)'), ('저녁', 'NNG(General Noun)'), ('뭐', 'IC(Interjection)'), ('먹', 'VV(Verb)'), ('을까', 'E(Verbal endings)'), ('추천', 'NNG(General Noun)'), ('해', 'XSV(Verb Suffix)'), ('줘', 'VX(Auxiliary Verb or Adjective)'), ('~~', 'SY(Other symbol)')]&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Done.&lt;/p&gt;</description>
      <category>API/ElasticSearch</category>
      <category>Analyzer</category>
      <category>Elasicsearch</category>
      <category>NORI</category>
      <category>노리 형태소 분석기</category>
      <category>엘라스틱서치</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/312</guid>
      <comments>https://acdongpgm.tistory.com/312#entry312comment</comments>
      <pubDate>Mon, 5 Dec 2022 20:45:47 +0900</pubDate>
    </item>
    <item>
      <title>[chatbot]. Integrating the PingPong builder API</title>
      <link>https://acdongpgm.tistory.com/311</link>
      <description>&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;핑퐁API는 스케터랩에서 서비스하고있는 일상대화 챗봇이고&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;한달에 300건 까지의 호출은 무료로 사용할 수 있다.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;그래서 핑퐁 API를 파이썬에 연결하여 테스트해보고자 한다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2210&quot; data-origin-height=&quot;1488&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c9Tu9h/btrSM44GyS9/BQIiqqfflGNm6ZfLABOHhk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c9Tu9h/btrSM44GyS9/BQIiqqfflGNm6ZfLABOHhk/img.png&quot; data-alt=&quot;Custom API 사용&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c9Tu9h/btrSM44GyS9/BQIiqqfflGNm6ZfLABOHhk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc9Tu9h%2FbtrSM44GyS9%2FBQIiqqfflGNm6ZfLABOHhk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;694&quot; height=&quot;467&quot; data-origin-width=&quot;2210&quot; data-origin-height=&quot;1488&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;Using the Custom API&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;You can also attach it directly to KakaoTalk or Facebook.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1504&quot; data-origin-height=&quot;878&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/nl9H1/btrSW66YY2W/lYBoSskNXOk4s3FqfbDXok/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/nl9H1/btrSW66YY2W/lYBoSskNXOk4s3FqfbDXok/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/nl9H1/btrSW66YY2W/lYBoSskNXOk4s3FqfbDXok/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fnl9H1%2FbtrSW66YY2W%2FlYBoSskNXOk4s3FqfbDXok%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;570&quot; height=&quot;333&quot; data-origin-width=&quot;1504&quot; data-origin-height=&quot;878&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;Once you sign up and create a bot, you can use the API.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;You are issued a request URL and an Authorization key.&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1670239885903&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import requests

PINGPONG_API_KEY = {발급받은 API_KEY}
PINGPONG_URL = {발급받은 URL}

def ping_pong_reply(question):
    url = PINGPONG_URL + &quot;10&quot;  # the number 10 is a session id (used to distinguish users)
    data = {&quot;request&quot;: {&quot;query&quot;: f&quot;{question}&quot;}}
    header = {&quot;Authorization&quot;: f&quot;{PINGPONG_API_KEY}&quot;}
    res = requests.post(url=url, headers=header, json=data).json()
    return res[&quot;response&quot;][&quot;replies&quot;][0][&quot;text&quot;]&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1670240074116&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;print(ping_pong_reply(&quot;오늘 저녁 뭐먹을까 추천해줘~&quot;))&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the response&lt;/p&gt;
&lt;pre id=&quot;code_1670240044651&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;{'response': {'replies': [{'from': {'score': 0.20030707120895386, 'reaction': '맛있는거요', 'name': 'replyReaction', 'link': '/bot/62c45862e4b0d7787e968b85/reply/reaction?scriptId=62c45862e4b0d7787e968dc1', 'from': '자동 답변 / 리액션 답변'}, 'type': 'text', 'text': '맛있는 거요! 어깨춤이 절로 나올 만큼!!!!  '}]}, 'version': '1.0.0'}&lt;/code&gt;&lt;/pre&gt;
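For reference, the access path used by `ping_pong_reply` can be traced against the response above, trimmed to the fields that matter:

```python
# The PingPong response above, trimmed to the fields ping_pong_reply reads.
sample = {
    "response": {
        "replies": [
            {"type": "text", "text": "맛있는 거요! 어깨춤이 절로 나올 만큼!!!!  "}
        ]
    },
    "version": "1.0.0",
}

def extract_reply_text(res):
    # Same indexing as the return statement of ping_pong_reply.
    return res["response"]["replies"][0]["text"]
```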
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;776&quot; data-origin-height=&quot;38&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ZN6GU/btrSQTIjZWD/Y17yqGVOk2eExBNoXXLPF1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ZN6GU/btrSQTIjZWD/Y17yqGVOk2eExBNoXXLPF1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ZN6GU/btrSQTIjZWD/Y17yqGVOk2eExBNoXXLPF1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FZN6GU%2FbtrSQTIjZWD%2FY17yqGVOk2eExBNoXXLPF1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;776&quot; height=&quot;38&quot; data-origin-width=&quot;776&quot; data-origin-height=&quot;38&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;The reply is so cute and original, haha.&lt;/p&gt;</description>
      <category>Machine learning/Chatbot</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/311</guid>
      <comments>https://acdongpgm.tistory.com/311#entry311comment</comments>
      <pubDate>Mon, 5 Dec 2022 20:35:04 +0900</pubDate>
    </item>
    <item>
      <title>[Python]. Preprocessing large datasets in chunks with multiprocessing</title>
      <link>https://acdongpgm.tistory.com/310</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;While trying to insert numpy data into a DB,&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;the formats didn't match and the insert failed.&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;So I tried converting the values to Python floats before inserting,&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;but converting numpy values to Python floats via&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;the tolist() function &lt;/span&gt;does not preserve the original FP format.&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size14&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;**Even when numpy FP16 values are converted to Python, they come out widened, because Python's built-in float is a 64-bit double.&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
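A quick way to see what tolist() actually returns (a minimal sketch):

```python
import numpy as np

# a single FP16 value; tolist() hands back built-in Python floats,
# which are 64-bit doubles, so the FP16 width is not preserved
x = np.array([0.1], dtype=np.float16)
y = x.tolist()[0]

print(type(y).__name__)  # float
print(y)                 # 0.0999755859375, the FP16 approximation of 0.1
print(round(y, 4))       # 0.1
```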
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;333&quot; data-origin-height=&quot;143&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/kHgYl/btrQq4E34om/zzCFsJ9R2ZS5BUWonqxug1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/kHgYl/btrQq4E34om/zzCFsJ9R2ZS5BUWonqxug1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/kHgYl/btrQq4E34om/zzCFsJ9R2ZS5BUWonqxug1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FkHgYl%2FbtrQq4E34om%2FzzCFsJ9R2ZS5BUWonqxug1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;333&quot; height=&quot;143&quot; data-origin-width=&quot;333&quot; data-origin-height=&quot;143&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So the plan is to convert the numpy values into Python floats and then use the round function to keep only the first four decimal places (approximating the FP16 precision).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I had to preprocess &lt;b&gt;7,000,000&lt;/b&gt; vector rows of 256 dimensions each, and the code I wrote without much thought&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;took a long time (over 10 minutes) and hit intermittent bottlenecks.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That is when multiprocessing, which I only knew as a concept, came to mind, and I tried applying it to the preprocessing for real.&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;b&gt;Single process&lt;/b&gt;&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;107&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bqG5tE/btrQp4FJ81u/hRZrhzuzUj3lSZgkUMc5WK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bqG5tE/btrQp4FJ81u/hRZrhzuzUj3lSZgkUMc5WK/img.png&quot; data-alt=&quot;단일 프로세스 사용&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bqG5tE/btrQp4FJ81u/hRZrhzuzUj3lSZgkUMc5WK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbqG5tE%2FbtrQp4FJ81u%2FhRZrhzuzUj3lSZgkUMc5WK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;648&quot; height=&quot;107&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;107&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;단일 프로세스 사용&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With a single process, only CPU 8 is used, plus a little of CPU 10, as shown above.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;561&quot; data-origin-height=&quot;84&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bfF04x/btrQrETp0vP/yibBn7bX8r5QILr5a9Qgxk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bfF04x/btrQrETp0vP/yibBn7bX8r5QILr5a9Qgxk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bfF04x/btrQrETp0vP/yibBn7bX8r5QILr5a9Qgxk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbfF04x%2FbtrQrETp0vP%2FyibBn7bX8r5QILr5a9Qgxk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;561&quot; height=&quot;84&quot; data-origin-width=&quot;561&quot; data-origin-height=&quot;84&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Monitoring with tqdm put the estimated time at &lt;span style=&quot;color: #ee2323;&quot;&gt;5 minutes 27 seconds&lt;/span&gt;, but because of intermittent &lt;span style=&quot;color: #006dd7;&quot;&gt;&lt;b&gt;bottlenecks&lt;/b&gt;&lt;/span&gt; it actually &lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;took over 10 minutes&lt;/span&gt;&lt;/b&gt;.&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;b&gt;Multiprocessing&lt;/b&gt;&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;blob&quot; data-origin-width=&quot;666&quot; data-origin-height=&quot;125&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/9dA7n/btrQtRR2Ntx/X8aUg2GNClpaFzanBJJ5t0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/9dA7n/btrQtRR2Ntx/X8aUg2GNClpaFzanBJJ5t0/img.png&quot; data-alt=&quot;멀티 프로세싱&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/9dA7n/btrQtRR2Ntx/X8aUg2GNClpaFzanBJJ5t0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F9dA7n%2FbtrQtRR2Ntx%2FX8aUg2GNClpaFzanBJJ5t0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;789&quot; height=&quot;148&quot; data-filename=&quot;blob&quot; data-origin-width=&quot;666&quot; data-origin-height=&quot;125&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;멀티 프로세싱&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Using multiprocessing, I ran the function on 16 of the machine's 24 CPU cores.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;559&quot; data-origin-height=&quot;370&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/q0gV0/btrQsG4giDe/GqtmyisjZ3BM5IM2QwvExK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/q0gV0/btrQsG4giDe/GqtmyisjZ3BM5IM2QwvExK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/q0gV0/btrQsG4giDe/GqtmyisjZ3BM5IM2QwvExK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fq0gV0%2FbtrQsG4giDe%2FGqtmyisjZ3BM5IM2QwvExK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;559&quot; height=&quot;370&quot; data-origin-width=&quot;559&quot; data-origin-height=&quot;370&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The 7,000,000 rows were split into 16 chunks (n_cpu=16) of 435,441 rows each, and you can see each CPU working through its share.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The runtime dropped from 10 minutes to 1 minute, a tenfold reduction.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;I regret not learning this sooner on such a good machine, but now that I know, I will put every CPU core to good use.&lt;/b&gt;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Code implementation&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1667619404987&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import numpy as np
from concurrent.futures import ProcessPoolExecutor
import multiprocessing
import itertools
import time
from tqdm import tqdm

n_cpu = int(multiprocessing.cpu_count() * 0.7) # my machine's CPU core count * target utilization (0.7)
# e.g. int(24 * 0.7) == 16&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Load the numpy data&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1667619423075&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;embedding_npy = np.load('./lassl_embedding_float16.npy')&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Conversion function&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1667619438093&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;def numpy_to_plist(start, end):
    # convert rows [start:end) of the global numpy array into plain
    # Python floats rounded to four decimal places
    result = []
    for row in tqdm(embedding_npy[start:end]):
        result.append([round(v, 4) for v in row.tolist()])
    return result&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Multiprocessing function&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1667619570780&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;def func_multi():
    global embedding_npy
    full_len = len(embedding_npy) # total number of rows
    process_index = int(full_len / n_cpu) # chunk size per worker
    
    rng_list = [(i + 1) * process_index for i in range(n_cpu)] # build the chunk-boundary list
    if rng_list[0] != 0:  # prepend 0 (the first range must start at 0)
        rng_list.insert(0, 0)
    if rng_list[-1] &amp;lt; full_len: # append the total row count at the end
        rng_list.append(full_len)
    
    with ProcessPoolExecutor(max_workers = n_cpu) as executor:
        float16_list = list(executor.map(numpy_to_plist, rng_list[0:-1], rng_list[1:]))
    
    result = list(itertools.chain.from_iterable(float16_list)) # flatten the per-core lists into one list
        
    return result&lt;/code&gt;&lt;/pre&gt;
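The boundary-list logic can be checked on its own. With the illustrative values full_len=7,000,000 and n_cpu=16, consecutive pairs of rng_list tile the whole range:

```python
# illustrative values: 7,000,000 rows split across 16 workers
full_len, n_cpu = 7_000_000, 16
process_index = full_len // n_cpu  # chunk size per worker

rng_list = [(i + 1) * process_index for i in range(n_cpu)]
if rng_list[0] != 0:            # the first range must start at 0
    rng_list.insert(0, 0)
if rng_list[-1] != full_len:    # make sure the tail rows are included
    rng_list.append(full_len)

# consecutive (start, end) pairs cover [0, full_len) without gaps
pairs = list(zip(rng_list[:-1], rng_list[1:]))
print(pairs[0], pairs[-1])  # (0, 437500) (6562500, 7000000)
```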
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Caution!!&amp;nbsp; The function must be executed inside the main block.&lt;/p&gt;
&lt;pre id=&quot;code_1672134271458&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;if __name__ == &quot;__main__&quot;:
    func_multi()&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. This error goes away if you reduce n_cpu, i.e., the number of workers.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;+ An additional example with tqdm&lt;/p&gt;
&lt;pre id=&quot;code_1678254865084&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import time  
import concurrent.futures
from tqdm import tqdm

def f(x):
    time.sleep(0.001)  # to visualize the progress
    return x**2

def run(f, my_iter):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        results = list(tqdm(executor.map(f, my_iter), total=len(my_iter)))
    return results

my_iter = range(100000)
run(f, my_iter)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://velog.io/@hslim8888/python-%EB%A9%80%ED%8B%B0-%ED%94%84%EB%A1%9C%EC%84%B8%EC%8A%A4%EB%A1%9C-%EC%9E%91%EC%97%85-%EC%86%8D%EB%8F%84-%ED%96%A5%EC%83%81%EC%8B%9C%ED%82%A4%EA%B8%B0&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://velog.io/@hslim8888/python-%EB%A9%80%ED%8B%B0-%ED%94%84%EB%A1%9C%EC%84%B8%EC%8A%A4%EB%A1%9C-%EC%9E%91%EC%97%85-%EC%86%8D%EB%8F%84-%ED%96%A5%EC%83%81%EC%8B%9C%ED%82%A4%EA%B8%B0&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://stackoverflow.com/questions/51601756/use-tqdm-with-concurrent-futures&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://stackoverflow.com/questions/51601756/use-tqdm-with-concurrent-futures&lt;/a&gt;&lt;/p&gt;</description>
      <category>Python</category>
      <category>Numpy</category>
      <category>Data preprocessing</category>
      <category>Multiprocessing</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/310</guid>
      <comments>https://acdongpgm.tistory.com/310#entry310comment</comments>
      <pubDate>Sat, 5 Nov 2022 12:44:38 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Building a Typo Generator: Text Noise Augmentation</title>
      <link>https://acdongpgm.tistory.com/309</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Typo generator idea:&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Typos happen when you press a key other than the one you intended.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So if we randomly swap in characters adjacent (on the keyboard) to the intended one, wouldn't that give us a typo generator?&lt;/p&gt;
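As a toy illustration of this idea (using a hypothetical two-entry Latin-keyboard neighbor map instead of the full Korean jamo dictionary the post builds later):

```python
import random

# illustrative neighbor map for two keys only; the real converter
# uses a full Korean jamo keyboard map
neighbors = {'a': ['q', 's', 'z'], 's': ['a', 'd', 'w']}

def make_typo(text: str, rng: random.Random) -> str:
    chars = list(text)
    idx = rng.randrange(len(chars))
    # swap the chosen character for a random keyboard neighbor,
    # keeping it unchanged if it has no neighbors listed
    chars[idx] = rng.choice(neighbors.get(chars[idx], [chars[idx]]))
    return ''.join(chars)

print(make_typo("as", random.Random(0)))
```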
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Types of typos&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&amp;nbsp;(1). Pressing a neighboring key by mistake.&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&amp;nbsp;(2). Typing a tense (double) consonant by mistake.&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&amp;nbsp;(3). Hitting the Korean/English toggle key by mistake.&lt;/b&gt;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Writing the typo dictionary&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1667024503710&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;typos_dict = {'ㅂ' : [ 'ㅈ','ㅁ','1','2','q','ㅃ'],
              'ㅃ' : ['ㅂ'],
              'ㅉ' : ['ㅈ'],
              'ㄸ' : ['ㄷ'],
              'ㄲ' : ['ㄱ'],
             'ㅈ' : ['2','3','ㅂ','ㄷ','ㅁ','ㄴ','ㅇ','w','ㅉ'],
             'ㄷ' : ['ㅈ','3','4','ㄱ','ㄴ','ㅇ','e','ㄸ'],
             'ㄱ' : ['ㄷ','4','5','ㅅ','ㅇ','ㄹ','r','ㄲ'],
             'ㅅ' : ['ㄱ','5','6','ㅛ','ㄹ','ㅎ','t','ㅆ'],
             'ㅁ' : ['ㅂ','ㅈ','ㄴ','ㅋ','a'],
             'ㄴ' : ['ㅁ','ㅈ','ㄷ','ㅇ','ㅋ','ㅌ','s'],
             'ㅇ' : ['ㄴ','ㄷ','ㄱ','ㄹ','ㅌ','ㅊ','d'],
             'ㄹ' : ['ㅇ','ㄱ','ㅅ','ㅎ','ㅊ','ㅍ','f'],
             'ㅎ' : ['ㄹ','ㅅ','ㅛ','ㅗ','ㅍ','ㅠ','g'],
             'ㅋ' : ['ㅁ','ㄴ','ㅊ','z'],
             'ㅌ' : ['ㅋ','ㄴ','ㅇ','ㅊ','x'],
             'ㅊ' : ['ㅌ','ㅇ','ㄹ','ㅍ','c'],
             'ㅍ' : ['ㅊ','ㄹ','ㅎ','ㅠ','v'],
             'ㅛ' : ['ㅅ','6','7','ㅕ','ㅎ','ㅗ','y'],
             'ㅕ' : ['ㅛ','7','8','ㅑ','ㅗ','ㅓ','u'],
             'ㅑ' : ['ㅕ','8','9','ㅐ','ㅏ','ㅓ','i'],
             'ㅐ' : ['ㅒ','ㅑ','9','0','ㅔ','ㅣ','ㅏ','o'],
             'ㅒ' : ['ㅐ'],
             'ㅔ' : ['ㅐ','0','ㅣ','ㅖ','p'],
             'ㅖ' : ['ㅔ'],
             'ㅗ' : ['ㅎ','ㅛ','ㅕ','ㅓ','ㅠ','ㅜ','h'],
             'ㅓ' : ['ㅗ','ㅕ','ㅑ','ㅏ','ㅜ','ㅡ','j'],
             'ㅏ' : ['ㅓ','ㅑ','ㅐ','ㅣ','ㅡ','k'],
             'ㅣ' : ['ㅏ','ㅐ','ㅔ','l'],
             'ㅠ' : ['ㅍ','ㅎ','ㅗ','ㅜ','b'],
             'ㅜ' : ['ㅠ','ㅗ','ㅓ','ㅡ','n'],
             'ㅡ' : ['ㅜ','ㅓ','ㅏ','m'],
             '?' : ['/'],
             '!' : ['1'],
             ' ' : [' '],
             'ㅢ' : ['ㅢ']}&lt;/code&gt;&lt;/pre&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Implementation&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;A function that swaps characters at random&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1667024535686&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import random
from hangul_utils import split_syllables, join_jamos  # hangul-utils package

def generate_noise(sentence, mod_num):
    # decompose the sentence into individual jamo
    jamo = list(split_syllables(sentence))

    # pick mod_num distinct positions to corrupt
    choice_idx = random.sample(range(1, len(jamo)), mod_num)
    for idx in choice_idx:
        # swap in a random keyboard neighbor; keep the original
        # character if it has no entry in typos_dict
        jamo[idx] = random.choice(typos_dict.get(jamo[idx], [jamo[idx]]))

    return join_jamos(''.join(jamo))&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;** mod_num : the number of jamo to replace&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Generating typos&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1667024644148&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;for i in range(10):
    print(generate_noise(&quot;저희 초면인가요? 반갑습니다&quot; , 3))&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Ta-da~&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;402&quot; data-origin-height=&quot;452&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wMaVN/btrPQI3myoD/pHV8ggx9RfqPW1Ot5I6dP1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wMaVN/btrPQI3myoD/pHV8ggx9RfqPW1Ot5I6dP1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wMaVN/btrPQI3myoD/pHV8ggx9RfqPW1Ot5I6dP1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FwMaVN%2FbtrPQI3myoD%2FpHV8ggx9RfqPW1Ot5I6dP1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;402&quot; height=&quot;452&quot; data-origin-width=&quot;402&quot; data-origin-height=&quot;452&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;</description>
      <category>Machine learning/NLP</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/309</guid>
      <comments>https://acdongpgm.tistory.com/309#entry309comment</comments>
      <pubDate>Sat, 29 Oct 2022 15:25:16 +0900</pubDate>
    </item>
    <item>
      <title>[Spark]. Connecting to MySQL from A to Z (for mac)</title>
      <link>https://acdongpgm.tistory.com/307</link>
      <description>&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;To connect to a MySQL database, you need to download the MySQL Java connector (Mysql-java-connector).&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;Download and install the version that matches your environment&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;&lt;a href=&quot;https://dev.mysql.com/downloads/connector/j/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://dev.mysql.com/downloads/connector/j/&lt;/a&gt;&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1666340452665&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;MySQL :: Download Connector/J&quot; data-og-description=&quot;MySQL Connector/J 8.0 is highly recommended for use with MySQL Server 8.0, 5.7 and 5.6. Please upgrade to MySQL Connector/J 8.0.&quot; data-og-host=&quot;dev.mysql.com&quot; data-og-source-url=&quot;https://dev.mysql.com/downloads/connector/j/&quot; data-og-url=&quot;https://dev.mysql.com/downloads/connector/j/&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/byxXf2/hyQauiSvMs/jLSph1icjFcrvxKmtRZ9Lk/img.png?width=700&amp;amp;height=260&amp;amp;face=0_0_700_260&quot;&gt;&lt;a href=&quot;https://dev.mysql.com/downloads/connector/j/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://dev.mysql.com/downloads/connector/j/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/byxXf2/hyQauiSvMs/jLSph1icjFcrvxKmtRZ9Lk/img.png?width=700&amp;amp;height=260&amp;amp;face=0_0_700_260');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;MySQL :: Download Connector/J&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;MySQL Connector/J 8.0 is highly recommended for use with MySQL Server 8.0, 5.7 and 5.6. Please upgrade to MySQL Connector/J 8.0.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;dev.mysql.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;Download it and unzip the archive.&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;632&quot; data-origin-height=&quot;528&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cmkdY1/btrPeUcBcYT/XP7zeOCIfq86e3w7EaNWRK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cmkdY1/btrPeUcBcYT/XP7zeOCIfq86e3w7EaNWRK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cmkdY1/btrPeUcBcYT/XP7zeOCIfq86e3w7EaNWRK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcmkdY1%2FbtrPeUcBcYT%2FXP7zeOCIfq86e3w7EaNWRK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;394&quot; height=&quot;329&quot; data-origin-width=&quot;632&quot; data-origin-height=&quot;528&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;Place the file above into the mysql folder.&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;494&quot; data-origin-height=&quot;588&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dD6HFd/btrPfOCzISz/7Cc4CORKjyOUkVkR0YN9w0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dD6HFd/btrPfOCzISz/7Cc4CORKjyOUkVkR0YN9w0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dD6HFd/btrPfOCzISz/7Cc4CORKjyOUkVkR0YN9w0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdD6HFd%2FbtrPfOCzISz%2F7Cc4CORKjyOUkVkR0YN9w0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;266&quot; height=&quot;317&quot; data-origin-width=&quot;494&quot; data-origin-height=&quot;588&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;Check the folder path&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1258&quot; data-origin-height=&quot;916&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bDyj2p/btrPfXMTK04/8PgeCtFNibKxsCTLnPKl0k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bDyj2p/btrPfXMTK04/8PgeCtFNibKxsCTLnPKl0k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bDyj2p/btrPfXMTK04/8PgeCtFNibKxsCTLnPKl0k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbDyj2p%2FbtrPfXMTK04%2F8PgeCtFNibKxsCTLnPKl0k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;482&quot; height=&quot;351&quot; data-origin-width=&quot;1258&quot; data-origin-height=&quot;916&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;Move to the MySQL base directory&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1666340621157&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;cd /usr/local/mysql

open .&lt;/code&gt;&lt;/pre&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1254&quot; data-origin-height=&quot;636&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lTtJu/btrPgqVuTsQ/rrYiidM3BTOaxgOAAG1m80/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lTtJu/btrPgqVuTsQ/rrYiidM3BTOaxgOAAG1m80/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lTtJu/btrPgqVuTsQ/rrYiidM3BTOaxgOAAG1m80/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlTtJu%2FbtrPgqVuTsQ%2FrrYiidM3BTOaxgOAAG1m80%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;537&quot; height=&quot;636&quot; data-origin-width=&quot;1254&quot; data-origin-height=&quot;636&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;Put the mysql-connector-java-8.0.26 folder on that path and the setup is complete&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;pre id=&quot;code_1666340800668&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import mysql.connector
from pyspark.sql import SparkSession

# create the Spark session
spark = SparkSession.builder.config(&quot;spark.jars&quot;, &quot;mysql-connector-java-8.0.26.jar&quot;) \
    .master(&quot;local&quot;).appName(&quot;PySpark_MySQL_test&quot;).getOrCreate()
    
df = (spark
    .read
    .format(&quot;jdbc&quot;)
    .option(&quot;url&quot;, &quot;jdbc:mysql://localhost:3306/TestDB&quot;)
    .option(&quot;driver&quot;, &quot;com.mysql.cj.jdbc.Driver&quot;)
    .option(&quot;dbtable&quot;, &quot;{Table-NAME}&quot;)
    .option(&quot;user&quot;, &quot;root&quot;).option(&quot;password&quot;, &quot;******&quot;)
    .load())&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;Check the data&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1666340902200&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;df.show()&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>RDBMS</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/307</guid>
      <comments>https://acdongpgm.tistory.com/307#entry307comment</comments>
      <pubDate>Fri, 21 Oct 2022 17:32:40 +0900</pubDate>
    </item>
    <item>
      <title>[WebSocket]. What is WebSocket, the protocol that enables two-way communication?</title>
      <link>https://acdongpgm.tistory.com/306</link>
      <description>&lt;article id=&quot;dfd56101-70bb-4360-b284-f5fded219ff2&quot; class=&quot;page sans&quot;&gt;&lt;header&gt;
&lt;h2 class=&quot;page-header-icon undefined&quot; data-ke-size=&quot;size26&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&lt;span class=&quot;icon&quot;&gt;  &lt;/span&gt;&lt;b&gt;WebSocket&lt;/b&gt;&lt;/span&gt;&lt;/h2&gt;
&lt;/header&gt;
&lt;div class=&quot;page-body&quot;&gt;
&lt;h2 id=&quot;194506be-22c3-4484-8125-fd236717d5fd&quot; class=&quot;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&lt;b&gt;What is a WebSocket?&lt;/b&gt;&lt;/span&gt;&lt;/h2&gt;
&lt;p id=&quot;8f1afbf2-db2b-432a-8cfd-bc97cf111397&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;WebSocket is a technology that makes it possible to open an interactive communication session between a user's browser and a server.&lt;/span&gt;&lt;/p&gt;
&lt;p id=&quot;0e70c6d6-e080-4ced-ba9b-273d2762427d&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;It is one of the standard protocols (commonly used with HTML5 today).&lt;/span&gt;&lt;/p&gt;
&lt;p id=&quot;523f5ee5-d948-46ea-abe7-0df595bb5251&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p id=&quot;868a6be1-e241-4ea8-8765-b554f36a4484&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;In web development, whenever the server and client needed to exchange data,&lt;/span&gt;&lt;/p&gt;
&lt;p id=&quot;823aa0a9-8682-4071-8179-aab52ce166ad&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;I used HTTP; implementing a RESTful API gets the job done.&lt;/span&gt;&lt;/p&gt;
&lt;p id=&quot;dfc34794-ebd0-4306-bbdf-5b555fdbfbeb&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p id=&quot;afdc56b6-d61c-4084-a4ba-7e36e6f4c95c&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;But HTTP is one-way: the client sends a request and receives a response,&lt;/span&gt;&lt;/p&gt;
&lt;p id=&quot;5f1e9edf-49d3-4e42-ba5c-8750a285ab80&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;whereas WebSocket, built on TCP, allows two-way communication.&lt;/span&gt;&lt;/p&gt;
&lt;p id=&quot;3b2b41d2-0086-44ca-8492-504fd9067afb&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p id=&quot;0413b57c-3f70-4c78-b5f6-36efe565630e&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;HTTP is like text messaging: the other side only replies after you text first (it never texts you first).&lt;/span&gt;&lt;/p&gt;
&lt;p id=&quot;8cd718e9-e399-4187-baad-9a512697990b&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;With WebSocket, you can receive messages without requesting them, and either side can initiate.&lt;/span&gt;&lt;/p&gt;
&lt;p id=&quot;cf144741-23c9-4017-8fd8-00259f4cbee5&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 id=&quot;557c9306-8459-4f5c-ba48-b3cf659e6e3d&quot; class=&quot;&quot; data-ke-size=&quot;size23&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&lt;b&gt;Connection sequence&lt;/b&gt;&lt;/span&gt;&lt;/h3&gt;
&lt;ol id=&quot;d4d61dd5-c595-42e8-8636-15a5f97373cc&quot; class=&quot;numbered-list&quot; style=&quot;list-style-type: disc;&quot; start=&quot;1&quot; type=&quot;1&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;The client requests a WebSocket connection in HTTP form (handshaking) &lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;- This must be done via the GET method.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Client and server verify each other's identity (authentication)&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;The server upgrades the connection to WebSocket&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&amp;nbsp;- The protocol switches to ws&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Two-way WebSocket communication begins&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&amp;nbsp; - Messages are encoded as UTF-8&lt;/span&gt;&lt;/li&gt;
&lt;/ol&gt;
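&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;As a sketch of what happens in step 3, the server proves it understood the handshake by deriving a Sec-WebSocket-Accept header from the client's Sec-WebSocket-Key, as defined in RFC 6455 (the sample key below is the RFC's own example, not a value from this post):&lt;/span&gt;&lt;/p&gt;

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server returns
    in its "101 Switching Protocols" response."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key from RFC 6455, section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```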
&lt;h3 id=&quot;86b6af51-df2e-4380-a874-195075aa3695&quot; class=&quot;&quot; data-ke-size=&quot;size23&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&lt;b&gt;Real-time networking&lt;/b&gt;&lt;/span&gt;&lt;/h3&gt;
&lt;ol id=&quot;2c268a63-3b63-400a-99ff-120a2d01a796&quot; class=&quot;numbered-list&quot; style=&quot;list-style-type: disc;&quot; start=&quot;1&quot; type=&quot;1&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Stream continuous data to the web quickly&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Chat, stock prices, video data&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Exchange data quickly across multiple devices&lt;/span&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 class=&quot;&quot; data-ke-size=&quot;size23&quot;&gt;&amp;nbsp;&lt;/h3&gt;
&lt;h3 id=&quot;b53e811f-fb34-4ef9-bf8f-b7c1c290215a&quot; class=&quot;&quot; data-ke-size=&quot;size23&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&lt;b&gt;Limitations of WebSocket&lt;/b&gt;&lt;/span&gt;&lt;/h3&gt;
&lt;ul id=&quot;454ec70b-b699-4359-bf80-84ffae9c1063&quot; class=&quot;bulleted-list&quot; style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li style=&quot;list-style-type: disc;&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;WebSocket only lets the two sides exchange strings; it does nothing beyond that.&lt;/span&gt;&lt;/li&gt;
&lt;li style=&quot;list-style-type: disc;&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Because no message format is defined, messages can be hard for an application to interpret.&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&amp;nbsp;&lt;span style=&quot;color: #ef5369;&quot;&gt;&lt;b&gt;* This is why a sub-protocol is often used to agree on the shape of the messages being exchanged.&lt;/b&gt;&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p id=&quot;6caf2dda-4a00-4e33-966f-48dcd863c25f&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&lt;b&gt;STOMP (Simple Text Oriented Message Protocol)&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ul id=&quot;ce0189c5-b77a-4385-9feb-a81e5a5e8779&quot; class=&quot;bulleted-list&quot; style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li style=&quot;list-style-type: disc;&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;One of the sub-protocols&lt;/span&gt;&lt;/li&gt;
&lt;li style=&quot;list-style-type: disc;&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Defines a message format for chat-style communication&lt;/span&gt;&lt;/li&gt;
&lt;li style=&quot;list-style-type: disc;&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Defined simply, much like HTTP, so it is easy to parse&lt;/span&gt;&lt;/li&gt;
&lt;li style=&quot;list-style-type: disc;&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Typically used on top of WebSocket.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p id=&quot;7872242c-1c88-4d4a-82a4-bef6c7f9f424&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&lt;b&gt;Python FastAPI example code&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p id=&quot;431669cd-d996-40f2-bb4c-64e5c8157aa0&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;&lt;a href=&quot;https://fastapi.tiangolo.com/advanced/websockets/&quot;&gt;https://fastapi.tiangolo.com/advanced/websockets/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre id=&quot;b8ec2e78-d8a2-4c8e-a306-2ffd769f3e4a&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot;&gt;&lt;code&gt;from fastapi import FastAPI, WebSocket
from fastapi.responses import HTMLResponse

app = FastAPI()

html = &quot;&quot;&quot;
&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
    &amp;lt;head&amp;gt;
        &amp;lt;title&amp;gt;Chat&amp;lt;/title&amp;gt;
    &amp;lt;/head&amp;gt;
    &amp;lt;body&amp;gt;
        &amp;lt;h1&amp;gt;WebSocket Chat&amp;lt;/h1&amp;gt;
        &amp;lt;form action=&quot;&quot; onsubmit=&quot;sendMessage(event)&quot;&amp;gt;
            &amp;lt;input type=&quot;text&quot; id=&quot;messageText&quot; autocomplete=&quot;off&quot;/&amp;gt;
            &amp;lt;button&amp;gt;Send&amp;lt;/button&amp;gt;
        &amp;lt;/form&amp;gt;
        &amp;lt;ul id='messages'&amp;gt;
        &amp;lt;/ul&amp;gt;
        &amp;lt;script&amp;gt;
            var ws = new WebSocket(&quot;ws://localhost:8000/ws&quot;);
            ws.onmessage = function(event) {
                var messages = document.getElementById('messages')
                var message = document.createElement('li')
                var content = document.createTextNode(event.data)
                message.appendChild(content)
                messages.appendChild(message)
            };
            function sendMessage(event) {
                var input = document.getElementById(&quot;messageText&quot;)
                ws.send(input.value)
                input.value = ''
                event.preventDefault()
            }
        &amp;lt;/script&amp;gt;
    &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&quot;&quot;&quot;


@app.get(&quot;/&quot;)
async def get():
    return HTMLResponse(html)


@app.websocket(&quot;/ws&quot;)
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    while True:
        data = await websocket.receive_text()
        await websocket.send_text(f&quot;Message text was: {data}&quot;)&lt;/code&gt;&lt;/pre&gt;
&lt;p id=&quot;29bcdf36-1b03-4316-9b37-9635fd990b92&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p id=&quot;e705b598-7005-4ef0-a7bd-194b79e1a978&quot; class=&quot;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Nanum Gothic';&quot;&gt;Result ( &lt;a href=&quot;http://localhost:8000/&quot;&gt;localhost:8000/&lt;/a&gt; )&lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/article&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;960&quot; data-origin-height=&quot;550&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dlgOzd/btrOBMl70yr/kPgN4EVd862JraeyAmAagk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dlgOzd/btrOBMl70yr/kPgN4EVd862JraeyAmAagk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dlgOzd/btrOBMl70yr/kPgN4EVd862JraeyAmAagk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdlgOzd%2FbtrOBMl70yr%2FkPgN4EVd862JraeyAmAagk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;637&quot; height=&quot;365&quot; data-origin-width=&quot;960&quot; data-origin-height=&quot;550&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;</description>
      <category>Network</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/306</guid>
      <comments>https://acdongpgm.tistory.com/306#entry306comment</comments>
      <pubDate>Fri, 14 Oct 2022 16:52:05 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Implementing Top-k Sampling for Chatbot Answers</title>
      <link>https://acdongpgm.tistory.com/304</link>
      <description>&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Top-k sampling is one of the methods commonly used in generative models.&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://velog.io/@nawnoes/Top-p-%EC%83%98%ED%94%8C%EB%A7%81-aka.-Nucleus-Sampling&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://velog.io/@nawnoes/Top-p-%EC%83%98%ED%94%8C%EB%A7%81-aka.-Nucleus-Sampling&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1664262151218&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;Top-p Sampling (aka. Nucleus Sampling)&quot; data-og-description=&quot;How to sample from language models 을 보며 정리GPT-2로 텍스트를 생성하다보면, 랜덤 샘플링이나 Top-k 샘플링 등을 사용해도 문맥이 잘 맞지 않는다고 생각이 된다. 추가로 다른 방법 중 Top-p, Nucleus 샘플&quot; data-og-host=&quot;velog.io&quot; data-og-source-url=&quot;https://velog.io/@nawnoes/Top-p-%EC%83%98%ED%94%8C%EB%A7%81-aka.-Nucleus-Sampling&quot; data-og-url=&quot;https://velog.io/@nawnoes/Top-p-샘플링-aka.-Nucleus-Sampling&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/bRlpuC/hyPWAWIVWE/gXbpMkn6Vtl2Tnz9C8vbQ1/img.png?width=950&amp;amp;height=500&amp;amp;face=0_0_950_500,https://scrap.kakaocdn.net/dn/zfK9c/hyPVlz7s72/qAwkhyn3vDnVdOjH2t9aMk/img.png?width=1284&amp;amp;height=1136&amp;amp;face=0_0_1284_1136,https://scrap.kakaocdn.net/dn/xG9p8/hyPWKSzrsZ/paIhXnPI4KvTbNdfOkTm9k/img.png?width=1400&amp;amp;height=996&amp;amp;face=0_0_1400_996&quot;&gt;&lt;a href=&quot;https://velog.io/@nawnoes/Top-p-%EC%83%98%ED%94%8C%EB%A7%81-aka.-Nucleus-Sampling&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://velog.io/@nawnoes/Top-p-%EC%83%98%ED%94%8C%EB%A7%81-aka.-Nucleus-Sampling&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/bRlpuC/hyPWAWIVWE/gXbpMkn6Vtl2Tnz9C8vbQ1/img.png?width=950&amp;amp;height=500&amp;amp;face=0_0_950_500,https://scrap.kakaocdn.net/dn/zfK9c/hyPVlz7s72/qAwkhyn3vDnVdOjH2t9aMk/img.png?width=1284&amp;amp;height=1136&amp;amp;face=0_0_1284_1136,https://scrap.kakaocdn.net/dn/xG9p8/hyPWKSzrsZ/paIhXnPI4KvTbNdfOkTm9k/img.png?width=1400&amp;amp;height=996&amp;amp;face=0_0_1400_996');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Top-p Sampling (aka. Nucleus Sampling)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;How to sample from language models 을 보며 정리GPT-2로 텍스트를 생성하다보면, 랜덤 샘플링이나 Top-k 샘플링 등을 사용해도 문맥이 잘 맞지 않는다고 생각이 된다. 추가로 다른 방법 중 Top-p, Nucleus 샘플&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;velog.io&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A chatbot usually answers with the candidate that scores highest under some metric (similarity score, BM25 score).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As a result, asking the same question over and over always yields the same answer.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;for the question &quot;What did you have to eat?&quot; the bot may keep answering with the same single food (tteokbokki) every time.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So we used top-k sampling to let the bot give varied answers.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The difference between top-k sampling and random sampling:&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;random sampling over the top 5 gives each candidate an equal 20% chance of appearing,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;whereas top-k sampling gives higher-scoring candidates a higher probability of being chosen.&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 100%; height: 76px;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr style=&quot;height: 19px;&quot;&gt;
&lt;td style=&quot;width: 33.3333%; height: 19px;&quot;&gt;Question (Q)&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%; height: 19px;&quot;&gt;Answer candidates (Top-3)&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%; height: 19px;&quot;&gt;Softmax value (probability)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 19px;&quot;&gt;
&lt;td style=&quot;width: 33.3333%; height: 57px;&quot; rowspan=&quot;3&quot;&gt;&lt;b&gt;What did you have to eat?&amp;nbsp;&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%; height: 19px;&quot;&gt;&lt;b&gt;I had tteokbokki : 90 points&amp;nbsp;&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%; height: 19px;&quot;&gt;&lt;b&gt;43.19%&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 19px;&quot;&gt;
&lt;td style=&quot;width: 33.3333%; height: 19px;&quot;&gt;&lt;b&gt;I had a hearty bowl of gukbap : 70 points&amp;nbsp;&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%; height: 19px;&quot;&gt;&lt;b&gt;35.36%&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 19px;&quot;&gt;
&lt;td style=&quot;width: 33.3333%; height: 19px;&quot;&gt;&lt;b&gt;I haven't eaten yet, have you? : 20 points&lt;/b&gt;&lt;/td&gt;
&lt;td style=&quot;width: 33.3333%; height: 19px;&quot;&gt;&lt;b&gt;21.45%&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Assuming we have the top-3 answers to a question, we apply the softmax function&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;to convert the scores into ratios, and use those ratios as sampling probabilities.&lt;/p&gt;
&lt;pre class=&quot;json&quot;&gt;&lt;code&gt;[0.4319, 0.3536, 0.2145]&lt;/code&gt;&lt;/pre&gt;
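&lt;p data-ke-size=&quot;size16&quot;&gt;The percentages in the table can be reproduced with softmax. One caveat: the raw scores must be scaled down first. Dividing by 100 is an assumption of this sketch (the post does not state the scale); softmax on the raw 90/70/20 scores would put almost all the probability mass on the top answer.&lt;/p&gt;

```python
import math

scores = [90, 70, 20]                # the three answer scores from the table
scaled = [s / 100 for s in scores]   # assumed scaling; not stated in the post
exps = [math.exp(s) for s in scaled]
probs = [e / sum(exps) for e in exps]
print([round(p, 4) for p in probs])  # [0.4319, 0.3536, 0.2145]
```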
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;TOP5 example code&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1664261816174&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from typing import List
from random import random

import torch
import torch.nn.functional as F

def top_k_sampling(score_list: List[int], answer_list: List[str]):
    score_list = torch.tensor(score_list)
    softmax_list = F.softmax(score_list, dim=0)
    if len(softmax_list) != 5:
        zeros = torch.zeros(1, 5 - len(softmax_list))[0]
        softmax_list = torch.cat([softmax_list, zeros], dim=0)
    random_value = random()
    range1 = softmax_list[0]
    range2 = range1 + softmax_list[1]
    range3 = range2 + softmax_list[2]
    range4 = range3 + softmax_list[3]
    if random_value &amp;lt;= range1:
        answer = answer_list[0]
    elif random_value &amp;gt; range1 and random_value &amp;lt;= range2:
        answer = answer_list[1]
    elif random_value &amp;gt; range2 and random_value &amp;lt;= range3:
        answer = answer_list[2]
    elif random_value &amp;gt; range3 and random_value &amp;lt;= range4:
        answer = answer_list[3]
    else:
        answer = answer_list[4]
    return answer&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;+@ A simpler implementation I found later (the code above only supports a fixed top-k)&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1671160090313&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from typing import List
import random

import numpy as np


def softmax(x):
    f_x = np.exp(x) / np.sum(np.exp(x))
    return f_x

def top_k_sampling(score_list: List[int], weight: int = 1):
    score_list = [i * weight for i in score_list]
    softmax_list = softmax(score_list)
    pick = random.choices(range(len(score_list)),
                          weights=softmax_list)

    return pick[0]&lt;/code&gt;&lt;/pre&gt;
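&lt;p data-ke-size=&quot;size16&quot;&gt;A quick sanity check with a fixed seed (hypothetical scores; the two helpers are repeated so the sketch is self-contained): higher-scoring indices should be drawn more often.&lt;/p&gt;

```python
import random
from collections import Counter

import numpy as np

def softmax(x):
    return np.exp(x) / np.sum(np.exp(x))

def top_k_sampling(score_list, weight=1):
    score_list = [i * weight for i in score_list]
    softmax_list = softmax(score_list)
    # weighted draw of one index, proportional to its softmax probability
    return random.choices(range(len(score_list)), weights=softmax_list)[0]

random.seed(0)
counts = Counter(top_k_sampling([0.9, 0.7, 0.2]) for _ in range(10_000))
print(counts)  # index 0 (the highest score) is drawn most often
assert counts[0] > counts[1] > counts[2]
```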
&lt;p data-ke-size=&quot;size16&quot;&gt;* Use the weight parameter to widen the spread of the distribution when the values in score_list differ only slightly.&lt;/p&gt;</description>
      <category>Machine learning/NLP</category>
      <category>answer selection</category>
      <category>nlp</category>
      <category>sampling</category>
      <category>softmax</category>
      <category>topk</category>
      <category>topk sampling</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/304</guid>
      <comments>https://acdongpgm.tistory.com/304#entry304comment</comments>
      <pubDate>Tue, 27 Sep 2022 16:02:49 +0900</pubDate>
    </item>
    <item>
      <title>[GCP]. Google Cloud Platform Server Initial Setup (feat. CentOS)</title>
      <link>https://acdongpgm.tistory.com/302</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;1. Install htop&lt;/p&gt;
&lt;pre id=&quot;code_1662188222623&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;# Update yum packages
$ sudo yum -y update

# The yum package manager does not ship htop by default,
# so add the EPEL repository first.
$ sudo yum -y install epel-release

# Install htop
$ sudo yum -y install htop&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2. Install unzip&lt;/p&gt;
&lt;pre id=&quot;code_1662188716491&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;$ rpm -qa | grep unzip

$ yum list unzip

$ sudo yum install -y unzip&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3. Install Docker&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Linux&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1662188841104&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;curl -s https://get.docker.com/ | sudo sh&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Run the command and enter your password; the script detects your Linux distribution and installs the latest version of Docker automatically.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Add your user to the docker group&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1662188841105&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo usermod -aG docker {username}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Start Docker&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1662189274804&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo systemctl start docker&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Change the Docker socket permissions&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1662188841106&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo chmod 666 /var/run/docker.sock&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Install Docker Compose&lt;/p&gt;
&lt;pre id=&quot;code_1662192365970&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;sudo curl -L &quot;https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)&quot; -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

docker-compose --version&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>MLops/AWS</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/302</guid>
      <comments>https://acdongpgm.tistory.com/302#entry302comment</comments>
      <pubDate>Sat, 3 Sep 2022 16:14:59 +0900</pubDate>
    </item>
    <item>
      <title>[Python]. Getting the Absolute Path of the Current File's Directory</title>
      <link>https://acdongpgm.tistory.com/301</link>
      <description>&lt;pre id=&quot;code_1662017784219&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1669784879187&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import os

FILE = os.path.dirname(os.path.realpath(__file__))&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1669784896100&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import os

BASE_DIR = os.getcwd()&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1669895808824&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from pathlib import Path
import os
import json

BASE_DIR = Path(__file__).resolve().parent.parent
DATA_PATH = os.path.join(BASE_DIR,'data','base_datasets.xlsx')
MAPPING_PATH = os.path.join(BASE_DIR,'mapping.json')

mapping = json.load(open(MAPPING_PATH))&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Python</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/301</guid>
      <comments>https://acdongpgm.tistory.com/301#entry301comment</comments>
      <pubDate>Thu, 1 Sep 2022 16:36:27 +0900</pubDate>
    </item>
    <item>
      <title>[ElasticSearch]. Comparing Multi-query Search: Async Search (async_search) vs Multi Search (msearch)</title>
      <link>https://acdongpgm.tistory.com/293</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;While developing a service on ElasticSearch,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;there are inevitably moments when several queries have to be sent at once.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Our service also sends three queries at the same time and merges the results.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;At first (before I knew about asynchronous programming) I simply sent the three queries one after another,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;which wasted a lot of time.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post I introduce two ways to handle multiple queries efficiently and compare their performance.&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Async Search&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Async search issues Elasticsearch queries asynchronously.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The three requests are sent concurrently, and responses are handled as they arrive.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The slowest of the three queries therefore determines the total search time.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1661240895093&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from elasticsearch import AsyncElasticsearch
import time
import asyncio

es_client = AsyncElasticsearch(**ES_CLIENT)

query1 = {
    &quot;size&quot;: 100,
    &quot;query&quot;: {
        &quot;match&quot;: {&quot;items1&quot;: &quot;삼성전자&quot;}}
}
query2 = {
    &quot;size&quot;: 100,
    &quot;query&quot;: {
        &quot;match&quot;: {&quot;items2&quot;: &quot;LG세탁기&quot;}}
}
query3 = {
    &quot;size&quot;: 100,
    &quot;query&quot;: {
        &quot;match&quot;: {&quot;items3&quot;: &quot;KT휴대폰&quot;}}
}

async def es_search(query):
    result = await es_client.search(index=&quot;index&quot;, body=query)
    return result


async def main():

    result = await asyncio.gather(
        es_search(query1),
        es_search(query2),
        es_search(query3),
    )

    print(result)


if __name__ == &quot;__main__&quot;:
    start = time.time()
    asyncio.run(main())
    end = time.time()
    print(end - start)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A function called with the await keyword must itself be defined as an async function.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Accordingly, you must use the AsyncElasticsearch client rather than the Elasticsearch client to search asynchronously.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
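&lt;p data-ke-size=&quot;size16&quot;&gt;The gain here is pure concurrency. A dependency-free sketch (asyncio.sleep stands in for the three queries, so no Elasticsearch cluster is needed) shows three 0.1-second waits finishing in about 0.1 seconds rather than 0.3:&lt;/p&gt;

```python
import asyncio
import time

async def fake_search(delay):
    # stand-in for one Elasticsearch query taking `delay` seconds
    await asyncio.sleep(delay)
    return delay

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(
        fake_search(0.1), fake_search(0.1), fake_search(0.1)
    )
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))  # the three waits overlap
assert 0.25 > elapsed  # far below the ~0.3 s a sequential run would take
```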
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;msearch: Multi Search&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Multi search is a built-in multi-query feature of the Elasticsearch service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You bundle several search queries into a single request (sent at the same time), and the results come back as a list, one entry per query.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1661241277187&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from elasticsearch import Elasticsearch
import time

es_client = Elasticsearch(**ES_CLIENT)

body = [{&quot;index&quot;: &quot;index&quot;},
        {
    &quot;size&quot;: 100,
            &quot;query&quot;: {
                &quot;match&quot;: {&quot;items1&quot;: &quot;삼성전자&quot;}}
},
    {&quot;index&quot;: &quot;index&quot;},
    {
    &quot;size&quot;: 100,
            &quot;query&quot;: {
                &quot;match&quot;: {&quot;items2&quot;: &quot;LG세탁기&quot;}}
},
    {&quot;index&quot;: &quot;index&quot;},
    {
    &quot;size&quot;: 100,
            &quot;query&quot;: {
                &quot;match&quot;: {&quot;items3&quot;: &quot;KT휴대폰&quot;}}
}]

def es_msearch(body):
    result = es_client.msearch(index=&quot;index&quot;, body=body)
    print(result)


def main():
    es_msearch(body=body)


if __name__ == &quot;__main__&quot;:
    start = time.time()
    print(main())
    end = time.time()
    print(end - start)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Speed comparison&lt;/b&gt;&lt;/h4&gt;
&lt;table style=&quot;border-collapse: collapse; width: 99.1861%; height: 68px;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr style=&quot;height: 17px;&quot;&gt;
&lt;td style=&quot;width: 25%; height: 17px;&quot;&gt;search size&lt;/td&gt;
&lt;td style=&quot;width: 18.8372%; height: 17px;&quot;&gt;100&lt;/td&gt;
&lt;td style=&quot;width: 17.093%; height: 17px;&quot;&gt;1000&lt;/td&gt;
&lt;td style=&quot;width: 19.7132%; height: 17px;&quot;&gt;10000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 17px;&quot;&gt;
&lt;td style=&quot;width: 25%; height: 17px;&quot;&gt;Base search API time(s)&lt;/td&gt;
&lt;td style=&quot;width: 18.8372%; height: 17px;&quot;&gt;0.399&lt;/td&gt;
&lt;td style=&quot;width: 17.093%; height: 17px;&quot;&gt;1.795&lt;/td&gt;
&lt;td style=&quot;width: 19.7132%; height: 17px;&quot;&gt;6.344&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 17px;&quot;&gt;
&lt;td style=&quot;width: 25%; height: 17px;&quot;&gt;Async search API time(s)&lt;/td&gt;
&lt;td style=&quot;width: 18.8372%; height: 17px;&quot;&gt;0.272&lt;/td&gt;
&lt;td style=&quot;width: 17.093%; height: 17px;&quot;&gt;0.974&lt;/td&gt;
&lt;td style=&quot;width: 19.7132%; height: 17px;&quot;&gt;4.560&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 17px;&quot;&gt;
&lt;td style=&quot;width: 25%; height: 17px;&quot;&gt;Multi search API&amp;nbsp; time(s)&lt;/td&gt;
&lt;td style=&quot;width: 18.8372%; height: 17px;&quot;&gt;0.345&lt;/td&gt;
&lt;td style=&quot;width: 17.093%; height: 17px;&quot;&gt;0.937&lt;/td&gt;
&lt;td style=&quot;width: 19.7132%; height: 17px;&quot;&gt;4.910&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;780&quot; data-origin-height=&quot;472&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bWTIm5/btrKqEdoQvZ/ZrpihLXuu9kNampSl9RMl1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bWTIm5/btrKqEdoQvZ/ZrpihLXuu9kNampSl9RMl1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bWTIm5/btrKqEdoQvZ/ZrpihLXuu9kNampSl9RMl1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbWTIm5%2FbtrKqEdoQvZ%2FZrpihLXuu9kNampSl9RMl1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;780&quot; height=&quot;472&quot; data-origin-width=&quot;780&quot; data-origin-height=&quot;472&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As the search size grows, the speedup from either method becomes clear.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Between msearch and async search, async search was faster by a slim margin.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you ever need to send several Elasticsearch queries at once, I hope this comparison helps.&lt;/p&gt;</description>
      <category>API/ElasticSearch</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/293</guid>
      <comments>https://acdongpgm.tistory.com/293#entry293comment</comments>
      <pubDate>Tue, 23 Aug 2022 17:13:04 +0900</pubDate>
    </item>
    <item>
      <title>[Saving]. Eating Cheaper by Buying Franchise Gifticons at Auction</title>
      <link>https://acdongpgm.tistory.com/291</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;I often go to A Twosome Place.&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;Not because the coffee is great, honestly..&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;The seats are roomy, so I can work without feeling self-conscious,&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;and being surrounded by people studying gives me positive energy.&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;I end up going at least once a week.&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;The problem is the price: an Americano (R) currently costs 4,500 won.&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;In this post I want to share a method most people already know:&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;buying a gifticon instead of the product itself, for roughly a 15% discount.&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;There are several gifticon auction services out there,&amp;nbsp;&lt;/b&gt;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;but I use the mobile coupon market (Gifti Star) inside the Shinhan Bank (SOL) app.&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;edited_KakaoTalk_Photo_2022-08-21-15-27-51 003.jpeg&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;1270&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/m1EyJ/btrKekswcxn/QRzvso3wMh8XbUFSA7m4Zk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/m1EyJ/btrKekswcxn/QRzvso3wMh8XbUFSA7m4Zk/img.png&quot; data-alt=&quot;기프티 스타 접속&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/m1EyJ/btrKekswcxn/QRzvso3wMh8XbUFSA7m4Zk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fm1EyJ%2FbtrKekswcxn%2FQRzvso3wMh8XbUFSA7m4Zk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;243&quot; height=&quot;1270&quot; data-filename=&quot;edited_KakaoTalk_Photo_2022-08-21-15-27-51 003.jpeg&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;1270&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;기프티 스타 접속&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;In the Shinhan Bank (SOL) app, tap the buy/sell coupons menu and you are taken straight there.&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt; &lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;edited_KakaoTalk_Photo_2022-08-21-15-27-51 002.jpeg&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;1305&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dEEmaH/btrKeu9NSjV/8EUNrTMYqJhpKe3GxbVWA1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dEEmaH/btrKeu9NSjV/8EUNrTMYqJhpKe3GxbVWA1/img.png&quot; data-alt=&quot;원하는 브랜드 상품 검색하여 구매&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dEEmaH/btrKeu9NSjV/8EUNrTMYqJhpKe3GxbVWA1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdEEmaH%2FbtrKeu9NSjV%2F8EUNrTMYqJhpKe3GxbVWA1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;255&quot; height=&quot;567&quot; data-filename=&quot;edited_KakaoTalk_Photo_2022-08-21-15-27-51 002.jpeg&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;1305&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;원하는 브랜드 상품 검색하여 구매&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Shinhan Bank customers get their account linked automatically, so checkout is painless.&lt;/b&gt;&lt;/p&gt;</description>
      <category>Economy/Saving Money</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/291</guid>
      <comments>https://acdongpgm.tistory.com/291#entry291comment</comments>
      <pubDate>Sun, 21 Aug 2022 15:32:21 +0900</pubDate>
    </item>
    <item>
      <title>[Date]. Finding the day of the week with datetime</title>
      <link>https://acdongpgm.tistory.com/290</link>
      <description>&lt;pre id=&quot;code_1660978929237&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;from datetime import datetime

def datetime_to_weekday(datetime_str : str) -&amp;gt; None:
    &quot;&quot;&quot;
    ex ) datetime_str : &quot;2022-04-21&quot;
    &quot;&quot;&quot;
    date_time = datetime.strptime(datetime_str, '%Y-%m-%d')
    date_type = date_time.date()
    weekday = date_type.weekday()

    # map weekday() to a Korean day name
    int_to_week = {0: &quot;월&quot;,
                1: &quot;화&quot;,
                2: &quot;수&quot;,
                3: &quot;목&quot;,
                4: &quot;금&quot;,
                5: &quot;토&quot;,
                6: &quot;일&quot;}

    print(int_to_week[weekday] + &quot;요일&quot;)

datetime_to_weekday(&quot;2022-04-21&quot;)&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Python</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/290</guid>
      <comments>https://acdongpgm.tistory.com/290#entry290comment</comments>
      <pubDate>Sat, 20 Aug 2022 16:02:34 +0900</pubDate>
    </item>
    <item>
      <title>[error]. Fixing the Mecab install error on arm64</title>
      <link>https://acdongpgm.tistory.com/289</link>
      <description>&lt;p&gt;&lt;figure class=&quot;fileblock&quot; data-ke-align=&quot;alignCenter&quot;&gt;&lt;a href=&quot;https://blog.kakaocdn.net/dn/blSnJI/btrJELrrH6w/w1FLlVL7yCqZZc2WNqWfa1/mecab_install.sh?attach=1&amp;amp;knm=tfile.sh&quot; class=&quot;&quot;&gt;
    &lt;div class=&quot;image&quot;&gt;&lt;/div&gt;
    &lt;div class=&quot;desc&quot;&gt;&lt;div class=&quot;filename&quot;&gt;&lt;span class=&quot;name&quot;&gt;mecab_install.sh&lt;/span&gt;&lt;/div&gt;
&lt;div class=&quot;size&quot;&gt;0.01MB&lt;/div&gt;
&lt;/div&gt;
  &lt;/a&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;aarch64&amp;nbsp;configure:&amp;nbsp;error:&amp;nbsp;cannot&amp;nbsp;guess&amp;nbsp;build&amp;nbsp;type;&amp;nbsp;you&amp;nbsp;must&amp;nbsp;specify&amp;nbsp;one&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;The problem : &lt;/b&gt;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&amp;nbsp;Installing the konlpy morphological analyzer Mecab on an EC2 memory-optimized (arm64) instance fails with an error&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When installing KoNLPy on Linux, the other morphological analyzers install fine with pip, but Mecab needs extra steps.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The official site recommends piping a bash script hosted on GitHub straight into the shell.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://konlpy.org/en/latest/install/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://konlpy.org/en/latest/install/&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1660461873682&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Installation &amp;mdash; KoNLPy 0.6.0 documentation&quot; data-og-description=&quot;Ubuntu Supported: Xenial(16.04.3 LTS), Bionic(18.04.3 LTS), Disco(19.04), Eoan(19.10) Install dependencies # Install Java 1.8 or up $ sudo apt-get install g++ openjdk-8-jdk python3-dev python3-pip curl Install KoNLPy $ python3 -m pip install --upgrade pip &quot; data-og-host=&quot;konlpy.org&quot; data-og-source-url=&quot;https://konlpy.org/en/latest/install/&quot; data-og-url=&quot;https://konlpy.org/en/latest/install/&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/mkJC0/hyPq0oIq3z/b1n9iwBdKPyt1Kk4utHTw1/img.png?width=1056&amp;amp;height=689&amp;amp;face=0_0_1056_689&quot;&gt;&lt;a href=&quot;https://konlpy.org/en/latest/install/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://konlpy.org/en/latest/install/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/mkJC0/hyPq0oIq3z/b1n9iwBdKPyt1Kk4utHTw1/img.png?width=1056&amp;amp;height=689&amp;amp;face=0_0_1056_689');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Installation &amp;mdash; KoNLPy 0.6.0 documentation&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Ubuntu Supported: Xenial(16.04.3 LTS), Bionic(18.04.3 LTS), Disco(19.04), Eoan(19.10) Install dependencies # Install Java 1.8 or up $ sudo apt-get install g++ openjdk-8-jdk python3-dev python3-pip curl Install KoNLPy $ python3 -m pip install --upgrade pip&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;konlpy.org&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;pre id=&quot;code_1660461919745&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;bash &amp;lt;(curl -s https://raw.githubusercontent.com/konlpy/konlpy/master/scripts/mecab.sh)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On arm64, however, running the command above raises&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;aarch64 configure: error: cannot guess build type; you must specify one&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;and the installation aborts.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The suggested fix is&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;to run ./configure --build=aarch64-unknown-linux-gnu instead.&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;Put that way, it wasn't obvious what to actually do, so after digging a little further&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;it turned out to mean: append that option to the ./configure command inside the bash script above.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;In other words, instead of piping the script straight into the shell, save a copy to a file, edit it, and run that bash file instead.&lt;/span&gt;&lt;/p&gt;
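If you would rather script the edit, here is a minimal sketch of the patch step in Python. The filename mecab_arm64.sh and the stand-in script body are illustrative assumptions; in practice the file would be your saved copy of konlpy's mecab.sh.

```python
from pathlib import Path

# Stand-in for the saved copy of konlpy's mecab.sh (illustrative content only).
script = Path("mecab_arm64.sh")
script.write_text("cd mecab-0.996-ko-0.9.2\n./configure\nmake && make install\n")

# Append the build triple to every ./configure invocation.
patched = script.read_text().replace(
    "./configure", "./configure --build=aarch64-unknown-linux-gnu"
)
script.write_text(patched)
print(patched.splitlines()[1])  # → ./configure --build=aarch64-unknown-linux-gnu
```

After patching, running the saved script with bash should get past the "cannot guess build type" stage.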
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;After the edit, the install finished with no problems!!&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;For anyone who finds this fiddly, I have uploaded the bash file above.&lt;/span&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Error Note</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/289</guid>
      <comments>https://acdongpgm.tistory.com/289#entry289comment</comments>
      <pubDate>Sun, 14 Aug 2022 16:31:58 +0900</pubDate>
    </item>
    <item>
      <title>[ElasticSearch]. Auto-restarting with crontab when a Linux server process dies</title>
      <link>https://acdongpgm.tistory.com/288</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;I am currently running Elasticsearch 8.1.2 with three cloud servers (AWS EC2) bound together.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;elasticsearch node 1 : 4GB , 2cpu&amp;nbsp; &lt;span style=&quot;background-color: #f1faff; color: #16191f;&quot;&gt;t3.medium&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;elasticsearch node 2 : 2GB , 2cpu &lt;span style=&quot;background-color: #f1faff; color: #16191f;&quot;&gt;t3.small&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;elasticsearch node 3 : 2GB , 2cpu &lt;span style=&quot;background-color: #f1faff; color: #16191f;&quot;&gt;t3.small&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;The problem : &lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;- The comparatively low-spec node 2 and node 3 servers keep going down&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I went through elasticsearch.log to find the cause, but it was hard to pin down.. still debugging...&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So I decided to apply the simplest stopgap: &lt;span style=&quot;color: #006dd7;&quot;&gt;&lt;b&gt;automatically restarting the process whenever it dies&lt;/b&gt;&lt;/span&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Linux ships with a scheduling tool called &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;crontab&lt;/b&gt;&lt;/span&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's put it to work here.&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style6&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Bash file ( writing the shell script )&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To use crontab, first write a bash file containing the commands to run.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Run Elasticsearch in the background (daemon)&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1660457359157&quot; class=&quot;shell&quot; data-ke-language=&quot;shell&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;vi start.sh&lt;/code&gt;&lt;/pre&gt;
&lt;pre id=&quot;code_1660457276344&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;/home/{user}/{es_dir}/bin/elasticsearch -d -p es.pid&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Restart Elasticsearch&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1660457393972&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;vi restart.sh&lt;/code&gt;&lt;/pre&gt;
&lt;pre id=&quot;code_1660457232406&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;#!/bin/sh

# look for a running elasticsearch process
pid=`ps -ef | grep &quot;elasticsearch&quot; | grep -v 'grep' | awk '{print $2}'`

# if no elasticsearch process is alive, run start.sh
if [ -z &quot;$pid&quot; ];
then
	echo $(date)
	bash /home/{user}/{es_dir}/start.sh
	echo &quot;&quot;
fi&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Make the scripts executable ( they need execute permission to run )&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1660457498602&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;chmod 755 *.sh&lt;/code&gt;&lt;/pre&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Applying crontab&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1660457977776&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;crontab -e&lt;/code&gt;&lt;/pre&gt;
&lt;pre id=&quot;code_1660458258724&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;0 */1 * * * bash /home/{user}/{es_dir}/restart.sh &amp;gt;&amp;gt; /home/{user}/{es_dir}/elastic_restart.log&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reading the crontab entry above:&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;it runs restart.sh at the top of every hour and appends the output to a log file.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For the full details of crontab syntax, plenty of other references cover it well.&lt;/p&gt;</description>
      <category>API/ElasticSearch</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/288</guid>
      <comments>https://acdongpgm.tistory.com/288#entry288comment</comments>
      <pubDate>Sun, 14 Aug 2022 15:28:01 +0900</pubDate>
    </item>
    <item>
      <title>[Chatbot] Fast similarity search with faiss</title>
      <link>https://acdongpgm.tistory.com/286</link>
      <description>&lt;h4 data-ke-size=&quot;size20&quot;&gt;Faiss is a similarity search library developed by Facebook AI.&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It is a library for efficient similarity search and clustering of dense vectors.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Faiss is far faster than the cosine_similarity provided by numpy or torch.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It is fast because the index holds the embedding information up front, including relationships between the vectors.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In other words, when the set of vectors is fixed (length n), you can expect high speed,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;but if the vectors change every time, each query pays for the inter-vector bookkeeping plus the similarity computation,&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;so the plain numpy functions end up faster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That is, faiss fixes the collection (n) and precomputes information about the vectors, then uses it to search quickly.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;( If any of this is wrong, please do point it out. )&lt;/p&gt;
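For reference, the brute-force numpy baseline that faiss replaces (exact search, recomputed from scratch on every query) can be sketched like this, on toy data rather than the chatbot embeddings:

```python
import numpy as np

def topk_l2(embeddings: np.ndarray, query: np.ndarray, k: int):
    """Exact nearest-neighbour search: squared L2 distance to every row."""
    dists = ((embeddings - query) ** 2).sum(axis=1)
    order = np.argsort(dists)[:k]  # indices of the k closest vectors
    return dists[order], order

rng = np.random.default_rng(0)
emb = rng.random((1000, 768)).astype(np.float32)
dists, idx = topk_l2(emb, emb[42], k=5)
print(idx[0])  # → 42 (the query is its own nearest neighbour)
```

Every query here scans all n vectors; the point of the faiss index is to pay a one-time setup cost so repeated queries over the same fixed collection avoid that full recomputation.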
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's work through it hands-on.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Installing Faiss&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1658127331391&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;# CPU-only version
$ conda install -c pytorch faiss-cpu

# GPU(+CPU) version
$ conda install -c pytorch faiss-gpu

# or for a specific CUDA version
$ conda install -c pytorch faiss-gpu cudatoolkit=10.2 # for CUDA 10.2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Dataset : &lt;a href=&quot;https://github.com/songys/Chatbot_data&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/songys/Chatbot_data&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;pairwise model : &lt;a href=&quot;https://huggingface.co/Huffon/sentence-klue-roberta-base&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://huggingface.co/Huffon/sentence-klue-roberta-base&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Preparation&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;(1). Embed every question (Q) in the dataset with the pairwise model&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;(2). Save the embedding vectors (.npy)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Loading packages&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1658127613163&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import pandas as pd
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer , util&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Loading the data&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1658127677291&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;# load the dataset
df = pd.read_csv(&quot;./ChatbotData.csv&quot;)

# load the precomputed embedding vectors
embeddings = np.load(&quot;./embeddings.npy&quot;)

# load the sentence encoder
embedder = SentenceTransformer(&quot;Huffon/sentence-klue-roberta-base&quot;)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Check that the loaded vectors have the expected dimensions.&lt;/p&gt;
&lt;pre id=&quot;code_1658127786696&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;# check the vector dimensions
embeddings.shape&lt;/code&gt;&lt;/pre&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(103675, 768)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Applying Faiss&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1658127827834&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;# build the faiss index

index = faiss.IndexFlatL2(embeddings.shape[1]) # initialize with the vector dimension
index.add(embeddings) # add the embeddings&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Here IndexFlatL2 sets the distance metric used to measure how similar, i.e. how close, the vectors are.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The L2 stands for Euclidean distance.&lt;/p&gt;
&lt;pre id=&quot;code_1658128259051&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;index.is_trained # check that the index is ready
index.ntotal # number of stored embeddings&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Similarity search&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1658128309718&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;top_k = 100
query = &quot;사랑했던 사람이 떠났어&quot;
query_embedding = embedder.encode(query, normalize_embeddings=True ,convert_to_tensor=True)

distances, indices = index.search(np.expand_dims(query_embedding,axis=0),top_k)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Similarity search comes down to a single index.search call.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The result is a tuple: the first element (distances) holds the distance to each match,&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;and the second (indices) holds the corresponding row indices.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;top_k sets how many results to return, and they come back automatically sorted from nearest to farthest.&lt;/p&gt;
&lt;pre id=&quot;code_1658128829621&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;# inspect the results
temp = df.iloc[indices[0]]
#temp
temp['distances'] = distances[0]
temp[['user','system','distances']].head(10)&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;966&quot; data-origin-height=&quot;896&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/BfFLz/btrHxVcSxK8/BTOchpefe1y6CrTHjuDFn0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/BfFLz/btrHxVcSxK8/BTOchpefe1y6CrTHjuDFn0/img.png&quot; data-alt=&quot;검색 결과&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/BfFLz/btrHxVcSxK8/BTOchpefe1y6CrTHjuDFn0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FBfFLz%2FbtrHxVcSxK8%2FBTOchpefe1y6CrTHjuDFn0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;451&quot; height=&quot;418&quot; data-origin-width=&quot;966&quot; data-origin-height=&quot;896&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;검색 결과&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;pre id=&quot;code_1658129436214&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;# save the index
faiss.write_index(index,&quot;sts.index&quot;)

# load the index back
index = faiss.read_index(&quot;./sts.index&quot;)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once computed, the index can be saved and simply reloaded the next time.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Speed test results&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running cosine_similarity over all 100,000 vectors took about &lt;b&gt;10 seconds&lt;/b&gt;,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;while faiss took &lt;b&gt;0.2~0.3 seconds&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;For a retrieval-based chatbot, search quality and speed matter most, and the faiss library delivered a huge performance boost.&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;i&gt;Reference : &lt;a href=&quot;https://www.youtube.com/watch?v=sKyvsdEv6rk&amp;amp;t=22s&quot;&gt;https://www.youtube.com/watch?v=sKyvsdEv6rk&amp;amp;t=22s&lt;/a&gt;&amp;nbsp;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp; &lt;a href=&quot;https://github.com/facebookresearch/faiss/wiki/Getting-started&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/facebookresearch/faiss/wiki/Getting-started&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/facebookresearch/faiss/blob/main/INSTALL.md&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;&amp;nbsp; https://github.com/facebookresearch/faiss/blob/main/INSTALL.md&lt;/a&gt;&lt;/p&gt;</description>
      <category>Machine learning/Chatbot</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/286</guid>
      <comments>https://acdongpgm.tistory.com/286#entry286comment</comments>
      <pubDate>Mon, 18 Jul 2022 16:29:04 +0900</pubDate>
    </item>
    <item>
      <title>[NLP]. Storing an embedding vector as a string (feat. byte type, base85)</title>
      <link>https://acdongpgm.tistory.com/284</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;The advantage of a Bi-Encoder is speed: the embedding vectors can be computed ahead of time.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Ideally the embeddings would all live in memory (RAM), but as discussed in the previous post, more memory means higher server cost.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To cut costs, I decided to convert the vectors to strings, store them in an RDBMS, and load them from there.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The reason for storing strings rather than float columns is that strings take less space, and reading one stored string into numpy with np.fromstring() is more effective than storing and fetching 768 separate float values.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;But this approach has a few problems.&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;1. The strings are too long.&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp; &amp;nbsp; : BERT vectors are 768-dimensional by default, and as float32 the string averages &lt;b&gt;8,500~9,000 characters&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;2. Because the strings are long, fetching them from the RDBMS takes a long time.&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp; &amp;nbsp; : fetch time grows in proportion to string length.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As a stopgap I converted float32 to float16 before storing, which fixed the speed problem.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;The average length dropped from 9,000 to 4,500 characters, so fetches became twice as fast.&lt;/b&gt;&lt;/p&gt;
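A rough sketch of where those character counts come from; the exact lengths depend on the values, so treat the numbers as illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.random(768).astype(np.float32)

s32 = ",".join(map(str, v))                     # digits of each float32 value
s16 = ",".join(map(str, v.astype(np.float16)))  # float16 prints fewer digits
print(len(s32) > len(s16))  # → True: fewer digits per value, shorter string
```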
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;Naturally, float16 is less precise than float32.&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;While looking for another approach, it occurred to me that the stored string doesn't have to keep the digits readable,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;and I eventually found a way to &lt;b&gt;keep full float32 precision while storing only 3,840 characters&lt;/b&gt;.&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Solution:&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;(1). Convert the vector to bytes.&amp;nbsp;&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1156&quot; data-origin-height=&quot;142&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/YdIxI/btrFlED5tdg/nvaz7YkbQWq0zkhUnsuQhk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/YdIxI/btrFlED5tdg/nvaz7YkbQWq0zkhUnsuQhk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/YdIxI/btrFlED5tdg/nvaz7YkbQWq0zkhUnsuQhk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FYdIxI%2FbtrFlED5tdg%2Fnvaz7YkbQWq0zkhUnsuQhk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;471&quot; height=&quot;142&quot; data-origin-width=&quot;1156&quot; data-origin-height=&quot;142&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;868&quot; data-origin-height=&quot;154&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dq8896/btrFoBl6cM5/JebMCSaioY6v8AIQK171wK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dq8896/btrFoBl6cM5/JebMCSaioY6v8AIQK171wK/img.png&quot; data-alt=&quot;vector to bytes&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dq8896/btrFoBl6cM5/JebMCSaioY6v8AIQK171wK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fdq8896%2FbtrFoBl6cM5%2FJebMCSaioY6v8AIQK171wK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;422&quot; height=&quot;75&quot; data-origin-width=&quot;868&quot; data-origin-height=&quot;154&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;vector to bytes&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Converting to bytes produces the characters above, and it is tempting to store that representation directly as a string.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;But when bytes are rendered as str, each '\' becomes '\\', which inflates the string length,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;and once it has been escaped to '\\' it is hard to convert back to the original bytes.&lt;/p&gt;
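The inflation is easy to see in Python; a minimal sketch:

```python
import base64
import numpy as np

v = np.float32(np.random.random(768))
raw = v.tobytes()  # 768 * 4 = 3,072 bytes

as_str = str(raw)                        # e.g. "b'\\x86...'": backslashes doubled
as_b85 = base64.b85encode(raw).decode()  # compact, no '\' characters at all

print(len(as_str) > len(as_b85))  # → True
print("\\" in as_b85)             # → False
```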
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;(2). Encode the bytes as base85.&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1234&quot; data-origin-height=&quot;146&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bmBstr/btrFmnu1Dt1/JYlVY9GxZPGSoeKyfIfbhK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bmBstr/btrFmnu1Dt1/JYlVY9GxZPGSoeKyfIfbhK/img.png&quot; data-alt=&quot;byte to base85&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bmBstr/btrFmnu1Dt1/JYlVY9GxZPGSoeKyfIfbhK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbmBstr%2FbtrFmnu1Dt1%2FJYlVY9GxZPGSoeKyfIfbhK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;590&quot; height=&quot;146&quot; data-origin-width=&quot;1234&quot; data-origin-height=&quot;146&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;byte to base85&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Base85 does not use the '\' character, so the problem above does not occur.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Both Base64 and Base85 exist, so why Base85? Simply because the Base85-encoded string is shorter.&lt;/p&gt;
&lt;p data-ke-size=&quot;size14&quot;&gt;&lt;i&gt;*Presumably because Base85 can represent more values per character.&lt;/i&gt;&lt;/p&gt;
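&lt;p data-ke-size=&quot;size16&quot;&gt;A quick check of the size claim (a sketch; the counts below hold for any 3,072-byte input, since Base64 emits 4 characters per 3 bytes while Base85 emits only 5 per 4):&lt;/p&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import base64
import numpy as np

# 768 float32 values = 3072 raw bytes
v = np.float32(np.random.random(768)).tobytes()

print(len(base64.b64encode(v)))  # 4096 characters
print(len(base64.b85encode(v)))  # 3840 characters&lt;/code&gt;&lt;/pre&gt;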
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;(3). Convert the Base85 bytes to a string and store it in the RDBMS&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Calling decode() converts it straight into a string.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Encoding a 768-dimensional float32 vector as a Base85 string always yields exactly 3,840 characters (3,072 bytes, 5 characters per 4 bytes).&lt;/p&gt;
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Implementation in Python&lt;/b&gt;&lt;/h4&gt;
&lt;pre id=&quot;code_1655794267479&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import base64
import numpy as np

# ------- vector to string -------------------

# create a random 768-dimensional float32 vector
random_v = np.float32(np.random.random(768))

# convert the vector to bytes
v = random_v.tobytes()

# encode the bytes as Base85 and decode to a str
v_tostring = base64.b85encode(v).decode()

# ------- string back to vector --------------
# decode the Base85 string back into bytes
v_decode = base64.b85decode(v_tostring)

# restore a vector identical to random_v
original_v = np.frombuffer(v_decode, dtype=np.float32)&lt;/code&gt;&lt;/pre&gt;
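&lt;p data-ke-size=&quot;size16&quot;&gt;As a self-contained sanity check of the round trip (same b85encode/frombuffer flow as above; the only assumption is the 768-dimension choice):&lt;/p&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import base64
import numpy as np

random_v = np.float32(np.random.random(768))
s = base64.b85encode(random_v.tobytes()).decode()
restored = np.frombuffer(base64.b85decode(s), dtype=np.float32)

assert np.array_equal(random_v, restored)  # lossless round trip
assert len(s) == 3840                      # fixed-width string column&lt;/code&gt;&lt;/pre&gt;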
&lt;hr contenteditable=&quot;false&quot; data-ke-type=&quot;horizontalRule&quot; data-ke-style=&quot;style5&quot; /&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Storing each in the RDBMS&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;674&quot; data-origin-height=&quot;346&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/PEHqA/btrFkvVeivW/veAMKh36cCOGtIyrWgxVpk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/PEHqA/btrFkvVeivW/veAMKh36cCOGtIyrWgxVpk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/PEHqA/btrFkvVeivW/veAMKh36cCOGtIyrWgxVpk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FPEHqA%2FbtrFkvVeivW%2FveAMKh36cCOGtIyrWgxVpk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;466&quot; height=&quot;239&quot; data-origin-width=&quot;674&quot; data-origin-height=&quot;346&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;Comparison with the previous approach&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1726&quot; data-origin-height=&quot;228&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/CLovf/btrSVo77djs/IDjO5rcKr0rKMzs8QliLc0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/CLovf/btrSVo77djs/IDjO5rcKr0rKMzs8QliLc0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/CLovf/btrSVo77djs/IDjO5rcKr0rKMzs8QliLc0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FCLovf%2FbtrSVo77djs%2FIDjO5rcKr0rKMzs8QliLc0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1726&quot; height=&quot;228&quot; data-origin-width=&quot;1726&quot; data-origin-height=&quot;228&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;One caveat: loading the strings back from the RDBMS and decoding them takes time,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;and with fetchall() the decoding cost scales with the number of rows fetched,&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;so I do not recommend this approach when selecting many rows.&lt;/p&gt;</description>
      <category>Machine learning/NLP</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/284</guid>
      <comments>https://acdongpgm.tistory.com/284#entry284comment</comments>
      <pubDate>Tue, 21 Jun 2022 16:08:02 +0900</pubDate>
    </item>
    <item>
      <title>[Mac_M1]. Fixing the tokenizer install error</title>
      <link>https://acdongpgm.tistory.com/283</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://towardsdatascience.com/hugging-face-transformers-on-apple-m1-26f0705874d7&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://towardsdatascience.com/hugging-face-transformers-on-apple-m1-26f0705874d7&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1655019742563&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;Install Hugging Face Transformers on Apple M1&quot; data-og-description=&quot;Along with Tensorflow and Tokenizers Package&quot; data-og-host=&quot;towardsdatascience.com&quot; data-og-source-url=&quot;https://towardsdatascience.com/hugging-face-transformers-on-apple-m1-26f0705874d7&quot; data-og-url=&quot;https://towardsdatascience.com/hugging-face-transformers-on-apple-m1-26f0705874d7&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/5o5Ok/hyOJtZuiAZ/3GPKjWlBD9tW7LEYehk2gk/img.jpg?width=1200&amp;amp;height=706&amp;amp;face=0_0_1200_706,https://scrap.kakaocdn.net/dn/oc2ry/hyOJwBULHi/i0zMZfCn1BKI4tSdtvbIAK/img.jpg?width=1400&amp;amp;height=824&amp;amp;face=0_0_1400_824&quot;&gt;&lt;a href=&quot;https://towardsdatascience.com/hugging-face-transformers-on-apple-m1-26f0705874d7&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://towardsdatascience.com/hugging-face-transformers-on-apple-m1-26f0705874d7&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/5o5Ok/hyOJtZuiAZ/3GPKjWlBD9tW7LEYehk2gk/img.jpg?width=1200&amp;amp;height=706&amp;amp;face=0_0_1200_706,https://scrap.kakaocdn.net/dn/oc2ry/hyOJwBULHi/i0zMZfCn1BKI4tSdtvbIAK/img.jpg?width=1400&amp;amp;height=824&amp;amp;face=0_0_1400_824');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Install Hugging Face Transformers on Apple M1&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Along with Tensorflow and Tokenizers Package&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;towardsdatascience.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;pre id=&quot;code_1655019764914&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;# install the Rust toolchain (needed to build tokenizers)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

git clone https://github.com/huggingface/tokenizers

pip install setuptools_rust

# then build the Python bindings from the cloned repository
cd tokenizers/bindings/python
pip install -e .&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The package-install error that shows up every time you install a transformer model on an M1 MacBook.&lt;/p&gt;</description>
      <category>Error Note</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/283</guid>
      <comments>https://acdongpgm.tistory.com/283#entry283comment</comments>
      <pubDate>Sun, 12 Jun 2022 16:43:37 +0900</pubDate>
    </item>
    <item>
      <title>[Mac_m1]. Fixing the sentencepiece install error</title>
      <link>https://acdongpgm.tistory.com/282</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/google/sentencepiece/issues/608&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/google/sentencepiece/issues/608&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1655019876196&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;object&quot; data-og-title=&quot;Add Mac M1 Compatibility &amp;middot; Issue #608 &amp;middot; google/sentencepiece&quot; data-og-description=&quot;Hi, Like the most part of Python librairies, SentencePiece won't install on Mac M1 architecture... &amp;quot;A revolution in data science&amp;quot; they said... what a joke, every data science library ...&quot; data-og-host=&quot;github.com&quot; data-og-source-url=&quot;https://github.com/google/sentencepiece/issues/608&quot; data-og-url=&quot;https://github.com/google/sentencepiece/issues/608&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/bSEkhc/hyOJwWc7YW/TqDdH4p05frIkVe3OfXhB1/img.png?width=1200&amp;amp;height=600&amp;amp;face=0_0_1200_600&quot;&gt;&lt;a href=&quot;https://github.com/google/sentencepiece/issues/608&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://github.com/google/sentencepiece/issues/608&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/bSEkhc/hyOJwWc7YW/TqDdH4p05frIkVe3OfXhB1/img.png?width=1200&amp;amp;height=600&amp;amp;face=0_0_1200_600');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Add Mac M1 Compatibility &amp;middot; Issue #608 &amp;middot; google/sentencepiece&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Hi, Like the most part of Python librairies, SentencePiece won't install on Mac M1 architecture... &quot;A revolution in data science&quot; they said... what a joke, every data science library ...&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;github.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignLeft&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;754&quot; data-origin-height=&quot;512&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ST1tC/btrEt3L4aaY/JdaukxRwjXQ0tT4cuMKhU1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ST1tC/btrEt3L4aaY/JdaukxRwjXQ0tT4cuMKhU1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ST1tC/btrEt3L4aaY/JdaukxRwjXQ0tT4cuMKhU1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FST1tC%2FbtrEt3L4aaY%2FJdaukxRwjXQ0tT4cuMKhU1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;754&quot; height=&quot;512&quot; data-origin-width=&quot;754&quot; data-origin-height=&quot;512&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;pre id=&quot;code_1655018638869&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;arch -arm64 brew install cmake

pip install --no-cache-dir sentencepiece&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;MacBook M1, you really are something...&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running the commands above in the terminal gets it installed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It helped me out, so I gave the issue a thumbs-up. :)&lt;/p&gt;</description>
      <category>Error Note</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/282</guid>
      <comments>https://acdongpgm.tistory.com/282#entry282comment</comments>
      <pubDate>Sun, 12 Jun 2022 16:24:47 +0900</pubDate>
    </item>
    <item>
      <title>[GitHub]</title>
      <link>https://acdongpgm.tistory.com/notice/281</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;GitHub : &lt;a href=&quot;https://github.com/jongmin-oh&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/jongmin-oh&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1688089977537&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;profile&quot; data-og-title=&quot;jongmin-oh - Overview&quot; data-og-description=&quot;ML Engineer : ChatBot Developer. jongmin-oh has 15 repositories available. Follow their code on GitHub.&quot; data-og-host=&quot;github.com&quot; data-og-source-url=&quot;https://github.com/jongmin-oh&quot; data-og-url=&quot;https://github.com/jongmin-oh&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/bs1OO0/hyS9TUynS4/vHEOXBWvEdIkbhFG1Kxxo1/img.jpg?width=460&amp;amp;height=460&amp;amp;face=0_0_460_460&quot;&gt;&lt;a href=&quot;https://github.com/jongmin-oh&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://github.com/jongmin-oh&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/bs1OO0/hyS9TUynS4/vHEOXBWvEdIkbhFG1Kxxo1/img.jpg?width=460&amp;amp;height=460&amp;amp;face=0_0_460_460');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;jongmin-oh - Overview&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;ML Engineer : ChatBot Developer. jongmin-oh has 15 repositories available. Follow their code on GitHub.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;github.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/notice/281</guid>
      <pubDate>Wed, 8 Jun 2022 15:52:38 +0900</pubDate>
    </item>
    <item>
      <title>[Teaching Material]. Implementing the Hangman game in Python</title>
      <link>https://acdongpgm.tistory.com/279</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;출처 : &lt;a href=&quot;https://nadocoding.tistory.com/11&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://nadocoding.tistory.com/11&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1652514839574&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;파이썬 행맨 (Hangman) 게임 만들기&quot; data-og-description=&quot;즐거운 코딩 시간입니다 ! 이번 개발 주제는 '행맨' 게임이구요. 행맨 게임은 다들 아시겠지만 아주 유명한 단어 퀴즈 프로그램입니다. 어떤 단어가 주어지면 그 단어의 길이만큼 빈 칸(밑줄) 이 &quot; data-og-host=&quot;nadocoding.tistory.com&quot; data-og-source-url=&quot;https://nadocoding.tistory.com/11&quot; data-og-url=&quot;https://nadocoding.tistory.com/11&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/bZtcsl/hyOoIW0okT/X3wr62cBfvckmB8Shb057K/img.jpg?width=800&amp;amp;height=400&amp;amp;face=0_0_800_400,https://scrap.kakaocdn.net/dn/5fvA3/hyOoLGe81l/KXvZSsLwxPqqBF4KpiluD1/img.jpg?width=800&amp;amp;height=400&amp;amp;face=0_0_800_400,https://scrap.kakaocdn.net/dn/Y9tNU/hyOoKN3hvX/mRzEkpL8b0Bso2MZdC1Nh1/img.png?width=1280&amp;amp;height=720&amp;amp;face=0_0_1280_720&quot;&gt;&lt;a href=&quot;https://nadocoding.tistory.com/11&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://nadocoding.tistory.com/11&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/bZtcsl/hyOoIW0okT/X3wr62cBfvckmB8Shb057K/img.jpg?width=800&amp;amp;height=400&amp;amp;face=0_0_800_400,https://scrap.kakaocdn.net/dn/5fvA3/hyOoLGe81l/KXvZSsLwxPqqBF4KpiluD1/img.jpg?width=800&amp;amp;height=400&amp;amp;face=0_0_800_400,https://scrap.kakaocdn.net/dn/Y9tNU/hyOoKN3hvX/mRzEkpL8b0Bso2MZdC1Nh1/img.png?width=1280&amp;amp;height=720&amp;amp;face=0_0_1280_720');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Making a Hangman game in Python&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;It's coding time! This project is the 'Hangman' game. As most people know, Hangman is a very famous word-quiz program: given a word, blanks (underscores) matching its length…&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;nadocoding.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1652514845416&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import random&lt;/code&gt;&lt;/pre&gt;
&lt;pre id=&quot;code_1652514855415&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;# 정답리스트: answer words / 누적입력알파벳: letters guessed so far
정답리스트 = ['banana','apple','orange','grape','mango']
누적입력알파벳 = ''
정답 = random.choice(정답리스트)  # pick a random answer word
남은기회 = len(정답) * 2          # chances = twice the word length

while True:
    # show the word, with unguessed letters as underscores
    성공여부 = True
    print()
    for 알파벳 in 정답:
        if 알파벳 in 누적입력알파벳:
            print(알파벳, end=&quot; &quot;)
        else:
            print(&quot;_&quot;, end=&quot; &quot;)
            성공여부 = False

    # check for a win before spending a chance
    if 성공여부:
        print(&quot;정답을 맞추셨습니다.&quot;)  # you got it right
        break

    if 남은기회 &amp;lt;= 0:
        print(&quot;실패하셨습니다.&quot;)  # out of chances
        break
    print(f&quot;남은 기회는 {남은기회}번 입니다.&quot;)  # chances remaining
    남은기회 -= 1

    # read exactly one letter
    while True:
        입력알파벳 = input(&quot;알파벳 하나만 입력해주세요 : &quot;)  # enter one letter
        if len(입력알파벳) == 1:
            break
        else:
            print(&quot;두 글자 이상 입력하셨습니다. 다시 입력해주세요.&quot;)  # too long, retry

    if 입력알파벳 not in 누적입력알파벳:
        누적입력알파벳 += 입력알파벳&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Python</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/279</guid>
      <comments>https://acdongpgm.tistory.com/279#entry279comment</comments>
      <pubDate>Sat, 14 May 2022 16:54:40 +0900</pubDate>
    </item>
    <item>
      <title>[Deep Learning]. A collection of effective training methods</title>
      <link>https://acdongpgm.tistory.com/278</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;1. Use FP32 and FP16 together (mixed precision) to save resources and speed up training&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A way to use GPU resources efficiently with Mixed Precision&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://bo-10000.tistory.com/32&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://bo-10000.tistory.com/32&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1652257084721&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;[딥러닝 논문리뷰] Mixed Precision Training (ICLR 2018)&quot; data-og-description=&quot;NVIDIA와&amp;nbsp;Baidu에서 연구하고 ICLR 2018에 발표된 논문인 Mixed Precision Training을 바탕으로 정리한 글입니다. 딥러닝 학습 과정에서 Mixed Precision을 이용하여 GPU resource를 효율적으로 사용할 수 있는..&quot; data-og-host=&quot;bo-10000.tistory.com&quot; data-og-source-url=&quot;https://bo-10000.tistory.com/32&quot; data-og-url=&quot;https://bo-10000.tistory.com/32&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/fnAzW/hyOmqPNss5/ANkxuX84SZv2HyMWjDIAnk/img.png?width=800&amp;amp;height=641&amp;amp;face=0_0_800_641,https://scrap.kakaocdn.net/dn/ca25UN/hyOmqCgBo0/YL9HOU5Rl4HHttPYCFIfek/img.png?width=800&amp;amp;height=641&amp;amp;face=0_0_800_641,https://scrap.kakaocdn.net/dn/fNrz2/hyOmtFEaIv/e5R3ANZI0QaL7eyQjLGJx1/img.png?width=965&amp;amp;height=775&amp;amp;face=0_0_965_775&quot;&gt;&lt;a href=&quot;https://bo-10000.tistory.com/32&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://bo-10000.tistory.com/32&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/fnAzW/hyOmqPNss5/ANkxuX84SZv2HyMWjDIAnk/img.png?width=800&amp;amp;height=641&amp;amp;face=0_0_800_641,https://scrap.kakaocdn.net/dn/ca25UN/hyOmqCgBo0/YL9HOU5Rl4HHttPYCFIfek/img.png?width=800&amp;amp;height=641&amp;amp;face=0_0_800_641,https://scrap.kakaocdn.net/dn/fNrz2/hyOmtFEaIv/e5R3ANZI0QaL7eyQjLGJx1/img.png?width=965&amp;amp;height=775&amp;amp;face=0_0_965_775');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;[Deep Learning Paper Review] Mixed Precision Training (ICLR 2018)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;A write-up based on Mixed Precision Training, a paper from NVIDIA and Baidu presented at ICLR 2018, about using mixed precision during deep-learning training to use GPU resources efficiently..&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;bo-10000.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
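&lt;p data-ke-size=&quot;size16&quot;&gt;Not from the linked post itself, just a minimal NumPy sketch of the trade-off: FP16 halves memory, but its coarse precision swallows small updates, which is why mixed precision keeps an FP32 master copy of the weights.&lt;/p&gt;
&lt;pre class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import numpy as np

w32 = np.zeros(1_000_000, dtype=np.float32)
w16 = w32.astype(np.float16)
print(w32.nbytes, w16.nbytes)  # 4000000 2000000: half the memory

# near 1.0 the FP16 spacing is about 1e-3, so a 1e-4 update vanishes
w = np.float16(1.0)
g = np.float16(1e-4)
print(w + g == w)  # True: the gradient step is lost in FP16&lt;/code&gt;&lt;/pre&gt;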
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2. More to come&lt;/p&gt;</description>
      <category>Machine learning/Deep Learning</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/278</guid>
      <comments>https://acdongpgm.tistory.com/278#entry278comment</comments>
      <pubDate>Wed, 11 May 2022 17:18:15 +0900</pubDate>
    </item>
    <item>
      <title>[Teaching Material]. Auto-shutting-down Windows with pyautogui mouse control</title>
      <link>https://acdongpgm.tistory.com/277</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;개개인 별로 모니터 해상도에 따라 다를 수 있음&lt;/b&gt;&lt;/p&gt;
&lt;pre id=&quot;code_1651561427150&quot; class=&quot;python&quot; data-ke-language=&quot;python&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;import pyautogui
import time

# moveTo(x, y, duration): glide the cursor to (x, y) over 1 second
pyautogui.moveTo(16, 752, 1)  # Start icon
pyautogui.click()
time.sleep(1)
pyautogui.moveTo(23, 704, 1)  # power menu (position depends on your setup)
pyautogui.click()
pyautogui.moveTo(94, 628, 1)  # shut-down entry (position depends on your setup)
time.sleep(1)
pyautogui.click()
pyautogui.click()&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Python</category>
      <author>Acdong</author>
      <guid isPermaLink="true">https://acdongpgm.tistory.com/277</guid>
      <comments>https://acdongpgm.tistory.com/277#entry277comment</comments>
      <pubDate>Tue, 3 May 2022 16:04:02 +0900</pubDate>
    </item>
  </channel>
</rss>