Introduction

Cloud computing has become a cornerstone for deploying modern IT solutions, offering scalable, efficient, and versatile options for businesses and individuals alike. With the evolution of cloud technology, three primary deployment models have emerged: public, private, and hybrid clouds. Each model offers distinct features, benefits, and considerations, making it crucial to understand their differences to choose the most suitable option for your needs. In this article, we’ll delve into the nuances of public, private, and hybrid clouds, helping you navigate the cloud landscape.

Read more »

Foreword

Object-Oriented Programming (OOP) is a programming paradigm that emphasizes designing and building applications with objects that bundle data (attributes) and methods (behavior). It improves the reusability, flexibility, and extensibility of software.

For object-oriented programming, the most basic things you need to know are:

One abstraction
Two purposes
Three characteristics
Five principles
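To make the core idea concrete before diving into those points, here is a minimal Python sketch (the class names are hypothetical, not from the article) showing an object that bundles data with methods, plus abstraction, inheritance, and polymorphism, which are typically counted among the characteristics above.

```python
# A minimal OOP sketch (hypothetical classes, not from the article).
class Animal:
    """Encapsulation: data (attributes) and behavior (methods) live together."""

    def __init__(self, name: str):
        self.name = name              # attribute (data)

    def speak(self) -> str:           # method (behavior)
        raise NotImplementedError     # abstraction: subclasses fill in the details


class Dog(Animal):                    # inheritance: reuse Animal's structure
    def speak(self) -> str:
        return f"{self.name} says woof"


class Cat(Animal):
    def speak(self) -> str:
        return f"{self.name} says meow"


# Polymorphism: the same call works on different concrete types.
for pet in (Dog("Lucky"), Cat("Mimi")):
    print(pet.speak())
```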

Read more »

Overview

In this article, I will focus on the computational details of the Transformer.
It covers self-attention, parallel processing, multi-head self-attention, positional encoding, and more.
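As a small taste of one of these topics, here is a sketch of the sinusoidal positional encoding from "Attention Is All You Need" in Python/NumPy; the function name and the shapes used are my own choices, not from the article.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    positions = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                 # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)   # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions
    pe[:, 1::2] = np.cos(angles)   # odd dimensions
    return pe

# Added to the token embeddings so the model knows each token's position.
print(sinusoidal_positional_encoding(seq_len=4, d_model=8).shape)  # (4, 8)
```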

Read more »

Overview

Self-attention allows the model to weigh the importance of different parts of an input sequence against each other, capturing relationships and dependencies between elements within the sequence. This is particularly powerful for tasks involving sequential or contextual information, such as language translation, text generation, and more.

Self-attention aims to replace what an RNN can do.
Its inputs and outputs take the same form as an RNN's, and its biggest advantages are:

  • It can parallelize operations across the sequence.
  • Each output vector has seen the entire input sequence, so there is no need to stack several layers as in a CNN.
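To make this concrete, here is a minimal sketch of single-head scaled dot-product self-attention in Python/NumPy. The weight matrices are random placeholders (a real model learns them), and a multi-head version would split this computation across several heads.

```python
import numpy as np

def self_attention(x: np.ndarray, W_q, W_k, W_v) -> np.ndarray:
    """x: (seq_len, d_model). Every output row attends to every input row."""
    Q, K, V = x @ W_q, x @ W_k, x @ W_v              # project to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (seq_len, seq_len) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole sequence
    return weights @ V                               # weighted sum, parallel over positions

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))                         # 5 tokens, d_model = 16
W_q, W_k, W_v = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(x, W_q, W_k, W_v).shape)        # (5, 16)
```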
Read more »

Overview

The Transformer is a deep learning architecture introduced in the paper “Attention Is All You Need” by Vaswani et al. in 2017. It revolutionized the field of natural language processing (NLP) and brought significant advancements in various sequence-to-sequence tasks. The Transformer architecture, thanks to its attention mechanisms, enables efficient processing of sequential data while capturing long-range dependencies.


The Transformer is a Seq2Seq (sequence-to-sequence) model that uses an encoder-decoder structure.
Below is a simple diagram:


Source: https://ai.googleblog.com/2016/09/a-neural-network-for-machine.html

The lines between the encoder and the decoder represent the attention.
The thicker a line, the more attention the decoder below pays to certain Chinese characters above when generating an English word.
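For readers who want to see the encoder-decoder structure in code, here is a minimal sketch using PyTorch's built-in torch.nn.Transformer (my own example, not from the paper or this article); a real translation model would add token embeddings, positional encodings, masking, and an output projection.

```python
import torch
import torch.nn as nn

# Encoder-decoder Transformer: the decoder attends to the encoder's output.
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6)

src = torch.rand(10, 32, 512)   # source sequence: (src_len, batch, d_model)
tgt = torch.rand(7, 32, 512)    # target sequence so far: (tgt_len, batch, d_model)

out = model(src, tgt)           # cross-attention links the decoder to encoder states
print(out.shape)                # torch.Size([7, 32, 512])
```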

Read more »

Overview

In this article, I will provide an introduction to adapters and LoRA, including their definitions, purposes, and functions. I will also explore their various applications and, lastly, delve into the distinctions that set them apart.
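As a preview of the LoRA idea, here is a minimal PyTorch sketch of a linear layer with a low-rank update, y = W x + (alpha / r) * B A x, where only A and B are trained; the class name and hyperparameter values are my own illustrative choices, not from the article.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained weight W plus a trainable low-rank update B @ A."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # freeze the pretrained weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768)
print(layer(torch.randn(4, 768)).shape)  # torch.Size([4, 768])
```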

Read more »

Abstract & Overview

In this post, let's take a look at a paper on arXiv:
A Survey on Multimodal Large Language Models
This paper introduces and organizes the current representative MLLMs (Multimodal LLMs), classifying them into four categories:
Multimodal Instruction Tuning (M-IT), Multimodal In-Context Learning (M-ICL), Multimodal Chain-of-Thought (M-CoT), and LLM-Aided Visual Reasoning (LAVR).
The first three form the foundations of MLLMs, while the last is a multimodal system with an LLM at its core, more like a system framework.

An MLLM builds on an LLM and moves from a single modality to multiple modalities. From the perspective of artificial intelligence, MLLMs take one more step forward than LLMs, for the following reasons:

  1. MLLMs better match the human sensory world, naturally accepting multi-sensory input
  2. MLLMs provide a friendly interface that supports multimodal input, making them easy for users to interact with
  3. MLLMs are more comprehensive problem solvers; while LLMs can handle NLP problems, MLLMs can usually support a wider range of tasks

Below, I will briefly review and analyze the more important parts of this paper; for the remaining details, please read this excellent paper yourself!

Read more »

Foreword

This is a series forming a full Git tutorial. I will cover the important and advanced topics that you should know in Git. By reading this series, you will learn the basic (but important) concepts of Git and the techniques for using it. I hope you gain a lot from it!

Read more »