Note: concurrency, closures, and macros are not covered in detail. Correct me wherever I'm wrong.
I made these notes for personal use; I hope they help someone.
Creating a Project with Cargo
$ cargo new hello_cargo --bin // ‘--lib’ for library
Note:
You can use the --vcs flag to have cargo new use a different version control system, or no version control system at all.
TOML (Tom's Obvious, Minimal Language) is Cargo's configuration format.
Cargo.lock: this file keeps track of the exact versions of your project's dependencies.
Cargo also provides a command called cargo check. This command quickly checks your code to make sure it compiles but doesn't produce an executable.
Building for Release
When your project is finally ready for release, use cargo build --release to compile it with optimizations. This command creates an executable in target/release instead of target/debug. The optimizations make your Rust code run faster, but turning them on lengthens the time it takes for your program to compile.
This is why there are two different profiles: one for development, when you want to rebuild quickly and often, and one for building the final program you'll give to a user that won't be rebuilt repeatedly and that will run as fast as possible. If you're benchmarking your code's running time, be sure to run cargo build --release and benchmark with the executable in target/release.
let mut guess = String::new();
The :: syntax in the ::new line indicates that new is an associated function of the String type. An associated function is a function that's implemented on a type, in this case String, rather than on a particular instance of a String. Some languages call this a static method.
The // syntax starts a comment that continues until the end of the line.
Updating a Crate to Get a New Version
When you want to update a crate, Cargo provides another command, update, which ignores the Cargo.lock file and figures out all the latest versions that fit your specifications in Cargo.toml. If that works, Cargo writes those versions to the Cargo.lock file. By default, though, Cargo will only look for versions greater than 0.3.0 and less than 0.4.0. If the rand crate had released the two new versions 0.3.15 and 0.4.0, you would see the 0.3.15 update if you ran cargo update.
Another neat feature of Cargo is that running the cargo doc --open command will build documentation provided by all your dependencies locally and open it in your browser.
The following is a list of keywords currently in use, with their functions described.
as - perform primitive casting, disambiguate the specific trait containing an item, or rename items in use statements
async - return a Future instead of blocking the current thread
await - suspend execution until the result of a Future is ready
break - exit a loop immediately
const - define constant items or constant raw pointers
continue - continue to the next loop iteration
crate - in a module path, refers to the crate root
dyn - dynamic dispatch to a trait object
else - fallback for if and if let control flow constructs
enum - define an enumeration
extern - link an external function or variable
false - Boolean false literal
fn - define a function or the function pointer type
for - loop over items from an iterator, implement a trait, or specify a higher-ranked lifetime
if - branch based on the result of a conditional expression
impl - implement inherent or trait functionality
in - part of for loop syntax
let - bind a variable
loop - loop unconditionally
match - match a value to patterns
mod - define a module
move - make a closure take ownership of all of its captures
mut - denote mutability in references, raw pointers, or pattern bindings
pub - denote public visibility in struct fields, impl blocks, or modules
ref - bind by reference
return - return from a function
Self - a type alias for the type we are defining or implementing
self - method subject or current module
static - global variable or lifetime lasting the entire program execution
struct - define a structure
super - parent module of the current module
trait - define a trait
true - Boolean true literal
type - define a type alias or an associated type
union - define a union; it is only a keyword when used in a union declaration
unsafe - denote unsafe code, functions, traits, or implementations
use - bring symbols into scope
where - denote clauses that constrain a type
while - loop conditionally based on the result of an expression
The following keywords don't have any functionality yet but are reserved by Rust for potential future use.
abstract
become
box
do
final
macro
override
priv
try
typeof
unsized
virtual
yield
Raw identifiers are the syntax that lets you use keywords where they wouldn't normally be allowed.
You use a raw identifier by prefixing a keyword with r#.
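For example (a small sketch, not in the original notes), r# lets you use the reserved word match as a function name:
fn r#match(needle: &str, haystack: &str) -> bool {
    haystack.contains(needle)
}

fn main() {
    assert!(r#match("foo", "foobar")); // calls the function named `match`
}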
Constants
First, you aren't allowed to use mut with constants. Constants aren't just immutable by default; they're always immutable. You declare constants using the const keyword instead of the let keyword, and the type of the value must be annotated.
const THREE_HOURS_IN_SECONDS: u32 = 60 * 60 * 3;
The last difference is that constants may be set only to a constant expression, not the result of a value that could only be computed at runtime.
Constants are valid for the entire time a program runs, within the scope in which they were declared.
Shadowing
You can declare a new variable with the same name as a previous variable. Rustaceans say that the first variable is shadowed by the second, which means that the second variable is what the compiler will see when you use the name of the variable. In effect, the second variable overshadows the first, taking any uses of the variable name to itself until either it itself is shadowed or the scope ends. We can shadow a variable by using the same variable's name and repeating the use of the let keyword as follows:
fn main() {
let x = 5;
let x = x + 1;
{
let x = x * 2;
println!("The value of x in the inner scope is: {x}");
}
println!("The value of x is: {x}");
}
Shadowing is different from marking a variable as mut because we'll get a compile-time error if we accidentally try to reassign to this variable without using the let keyword. By using let, we can perform a few transformations on a value but have the variable be immutable after those transformations have been completed.
The other difference between mut and shadowing is that because we're effectively creating a new variable when we use the let keyword again, we can change the type of the value but reuse the same name. For example, say our program asks a user to show how many spaces they want between some text by inputting space characters, and then we want to store that input as a number:
let spaces = " ";
let spaces = spaces.len();
The first spaces variable is a string type and the second spaces variable is a number. Shadowing thus spares us from having to come up with different names, such as spaces_str and spaces_num; instead, we can reuse the simpler spaces name. However, if we try to use mut for this, as shown here, we'll get a compile-time error:
let mut spaces = " "; // Compile-time Error
spaces = spaces.len();
Data Types
A scalar type represents a single value. Rust has four primary scalar types: integers, floating-point numbers, Booleans, and characters.
| Length | Signed | Unsigned |
|---|---|---|
| 8-bit | i8 | u8 |
| 16-bit | i16 | u16 |
| 32-bit | i32 | u32 |
| 64-bit | i64 | u64 |
| 128-bit | i128 | u128 |
| arch | isize | usize |
Each signed variant can store numbers from -(2^(n-1)) to 2^(n-1) - 1 inclusive, where n is the number of bits that variant uses.
Note: number literals that can be multiple numeric types allow a type suffix, such as 57u8, to designate the type. Number literals can also use _ as a visual separator to make the number easier to read, such as 1_000, which has the same value as 1000.
| Number literals | Example |
|---|---|
| Decimal | 98_222 |
| Hex | 0xff |
| Octal | 0o77 |
| Binary | 0b1111_0000 |
| Byte (u8 only) | b'A' |
Integer Overflow
Let's say you have a variable of type u8 that can hold values between 0 and 255. If you try to change the variable to a value outside that range, such as 256, integer overflow will occur, which can result in one of two behaviors. When you're compiling in debug mode, Rust includes checks for integer overflow that cause your program to panic at runtime if this behavior occurs. Rust uses the term panicking when a program exits with an error.
When you're compiling in release mode with the --release flag, Rust does not include checks for integer overflow that cause panics. Instead, if overflow occurs, Rust performs two's complement wrapping. In short, values greater than the maximum value the type can hold wrap around to the minimum of the values the type can hold. In the case of a u8, the value 256 becomes 0, the value 257 becomes 1, and so on. The program won't panic, but the variable will have a value that probably isn't what you were expecting it to have. Relying on integer overflow's wrapping behavior is considered an error.
To explicitly handle the possibility of overflow, you can use these families of methods provided by the standard library for primitive numeric types (a short sketch follows this list):
• Wrap in all modes with the wrapping_* methods, such as wrapping_add
• Return the None value if there is overflow with the checked_* methods
• Return the value and a Boolean indicating whether there was overflow with the overflowing_* methods
• Saturate at the value's minimum or maximum values with the saturating_* methods
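A minimal sketch of these method families on a u8 at its maximum value (expected output noted in comments; added for illustration):
fn main() {
    let max: u8 = 255;
    println!("{}", max.wrapping_add(1));      // 0: wraps around to the minimum
    println!("{:?}", max.checked_add(1));     // None: overflow was detected
    println!("{:?}", max.overflowing_add(1)); // (0, true): wrapped value plus an overflow flag
    println!("{}", max.saturating_add(1));    // 255: clamped at the maximum
}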
Floating-Point Types
Rust also has two primitive types for floating-point numbers, which are numbers with decimal points. Rust's floating-point types are f32 and f64, which are 32 bits and 64 bits in size, respectively. The default type is f64 because on modern CPUs it's roughly the same speed as f32 but is capable of more precision. All floating-point types are signed.
Floating-point numbers are represented according to the IEEE-754 standard. The f32 type is a single-precision float, and f64 has double precision.
Numeric Operations
Rust supports the basic mathematical operations you'd expect for all the number types: addition, subtraction, multiplication, division, and remainder. Integer division truncates toward zero to the nearest integer.
fn main() {
let sum = 5 + 10; // addition
let difference = 95.5 - 4.3; // subtraction
let product = 4 * 30; // multiplication
let quotient = 56.7 / 32.2; // division
let floored = 2 / 3; // Results in 0
let remainder = 43 % 5; // remainder
}
The Boolean Type
As in most other programming languages, a Boolean type in Rust has two possible values: true and false. Booleans are one byte in size. The Boolean type in Rust is specified using bool.
The Character Type
Rust's char type is the language's most primitive alphabetic type. Here are some examples of declaring char values:
fn main() {
let c = 'z';
let z: char = 'ℤ'; // with explicit type annotation
}
Note that we specify char literals with single quotes, as opposed to string literals, which use double quotes. Rust's char type is four bytes in size and represents a Unicode scalar value, which means it can represent a lot more than just ASCII. Accented letters; Chinese, Japanese, and Korean characters; emoji; and zero-width spaces are all valid char values in Rust. Unicode scalar values range from U+0000 to U+D7FF and U+E000 to U+10FFFF inclusive.
Compound Types
Compound types can group multiple values into one type. Rust has two primitive compound types: tuples and arrays.
The Tuple Type
A tuple is a general way of grouping together a number of values with a variety of types into one compound type. Tuples have a fixed length: once declared, they cannot grow or shrink in size. We create a tuple by writing a comma-separated list of values inside parentheses. Each position in the tuple has a type, and the types of the different values in the tuple don't have to be the same.
fn main() {
let tup: (i32, f64, u8) = (500, 6.4, 1);
}
The variable tup binds to the entire tuple because a tuple is considered a single compound element. To get the individual values out of a tuple, we can use pattern matching to destructure a tuple value, like this:
fn main() {
let tup = (500, 6.4, 1);
let (x, y, z) = tup;
println!("The value of y is: {y}");
}
This program first creates a tuple and binds it to the variable tup. It then uses a pattern with let to take tup and turn it into three separate variables, x, y, and z. This is called destructuring because it breaks the single tuple into three parts. Finally, the program prints the value of y, which is 6.4.
We can also access a tuple element directly by using a period (.) followed by the index of the value we want to access. For example:
fn main() {
let x: (i32, f64, u8) = (500, 6.4, 1);
let five_hundred = x.0;
let six_point_four = x.1;
let one = x.2;
}
This program creates the tuple x and then accesses each element of the tuple using their respective indices. As with most programming languages, the first index in a tuple is 0. The tuple without any values has a special name, unit. This value and its corresponding type are both written () and represent an empty value or an empty return type. Expressions implicitly return the unit value if they don't return any other value.
The Array Type
Another way to have a collection of multiple values is with an array. Unlike a tuple, every element of an array must have the same type. Unlike arrays in some other languages, arrays in Rust have a fixed length. We write the values in an array as a comma-separated list inside square brackets:
fn main() {
let a = [1, 2, 3, 4, 5];
}
Arrays are useful when you want your data allocated on the stack rather than the heap or when you want to ensure you always have a fixed number of elements. An array isn't as flexible as the vector type, though. A vector is a similar collection type provided by the standard library that is allowed to grow or shrink in size.
You write an array's type using square brackets with the type of each element, a semicolon, and then the number of elements in the array:
let a: [i32; 5] = [1, 2, 3, 4, 5];
You can also initialize an array to contain the same value for each element by specifying the initial value, followed by a semicolon, and then the length of the array in square brackets, as shown here:
let a = [3; 5];
The array named a will contain 5 elements that will all be set to the value 3 initially. This is the same as writing let a = [3, 3, 3, 3, 3]; but in a more concise way.
Accessing Array Elements
An array is a single chunk of memory of a known, fixed size that can be allocated on the stack. You can access elements of an array using indexing, like this:
fn main() {
let a = [1, 2, 3, 4, 5];
let first = a[0];
let second = a[1];
}
Rust code uses snake case as the conventional style for function and variable names, in which all letters are lowercase and underscores separate words.
Statements and Expressions
Function bodies are made up of a series of statements optionally ending in an expression. So far, the functions we've covered haven't included an ending expression, but you have seen an expression as part of a statement. Because Rust is an expression-based language, this is an important distinction to understand. Other languages don't have the same distinctions, so let's look at what statements and expressions are and how their differences affect the bodies of functions.
Statements are instructions that perform some action and do not return a value. Expressions evaluate to a resultant value. Let's look at some examples.
We've actually already used statements and expressions. Creating a variable and assigning a value to it with the let keyword is a statement.
fn main() {
let y = 6;
}
Expressions evaluate to a value and make up most of the rest of the code that you'll write in Rust. Consider a math operation, such as 5 + 6, which is an expression that evaluates to the value 11. Expressions can be part of statements: the 6 in the statement let y = 6; is an expression that evaluates to the value 6. Calling a function is an expression. Calling a macro is an expression. A new scope block created with curly brackets is an expression, for example:
fn main() {
let y = {
let x = 3;
x + 1
};
println!("The value of y is: {y}");
}
This expression:
{
let x = 3;
x + 1
}
is a block that, in this case, evaluates to 4. That value gets bound to y as part of the let statement. Note that the x + 1 line doesn't have a semicolon at the end, unlike most of the lines you've seen so far. Expressions do not include ending semicolons. If you add a semicolon to the end of an expression, you turn it into a statement, and it will then not return a value. Keep this in mind as you explore function return values and expressions next.
Functions with Return Values
Functions can return values to the code that calls them. We don't name return values, but we must declare their type after an arrow (->). In Rust, the return value of the function is synonymous with the value of the final expression in the block of the body of a function. You can return early from a function by using the return keyword and specifying a value, but most functions return the last expression implicitly. Here's an example of a function that returns a value:
fn five() -> i32 {
5
}
fn main() {
let x = five();
println!("The value of x is: {x}");
}
if Expressions
An if expression allows you to branch your code depending on conditions. You provide a condition and then state, "If this condition is met, run this block of code. If the condition is not met, do not run this block of code." Blocks of code associated with the conditions in if expressions are sometimes called arms, just like the arms in match expressions.
fn main() {
let number = 3;
if number < 5 {
println!("condition was true");
} else {
println!("condition was false");
}
}
Handling Multiple Conditions with else if
You can use multiple conditions by combining if and else in an else if expression. For example:
fn main() {
let number = 6;
if number % 4 == 0 {
println!("number is divisible by 4");
} else if number % 3 == 0 {
println!("number is divisible by 3");
} else if number % 2 == 0 {
println!("number is divisible by 2");
} else {
println!("number is not divisible by 4, 3, or 2");
}
}
When this program executes, it checks each if expression in turn and executes the first body for which the condition evaluates to true. Note that even though 6 is divisible by 2, we don't see the output "number is divisible by 2", nor do we see the "number is not divisible by 4, 3, or 2" text from the else block. That's because Rust only executes the block for the first true condition, and once it finds one, it doesn't even check the rest.
Using if in a let Statement
Because if is an expression, we can use it on the right side of a let statement to assign the outcome to a variable:
fn main() {
let condition = true;
let number = if condition { 5 } else { 6 };
println!("The value of number is: {number}");
}
Rust has three kinds of loops: loop, while, and for.
The loop keyword tells Rust to execute a block of code over and over again forever or until you explicitly tell it to stop.
Returning Values from Loops
One of the uses of a loop is to retry an operation you know might fail, such as checking whether a thread has completed its job. You might also need to pass the result of that operation out of the loop to the rest of your code. To do this, you can add the value you want returned after the break expression you use to stop the loop; that value will be returned out of the loop so you can use it, as shown here:
fn main() {
let mut counter = 0;
let result = loop {
counter += 1;
if counter == 10 {
break counter * 2;
}
};
println!("The result is {result}");
}
Loop Labels to Disambiguate Between Multiple Loops
If you have loops within loops, break and continue apply to the innermost loop at that point. You can optionally specify a loop label on a loop, which you can then use with break or continue to specify that those keywords apply to the labeled loop instead of the innermost loop. Loop labels must begin with a single quote. Here's an example with two nested loops:
fn main() {
let mut count = 0;
'counting_up: loop {
println!("count = {count}");
let mut remaining = 10;
loop {
println!("remaining = {remaining}");
if remaining == 9 {
break;
}
if count == 2 {
break 'counting_up;
}
remaining -= 1;
}
count += 1;
}
println!("End count = {count}");
}
The outer loop has the label 'counting_up, and it will count up from 0 to 2. The inner loop without a label counts down from 10 to 9. The first break that doesn't specify a label will exit the inner loop only. The break 'counting_up; statement will exit the outer loop.
for Loops
Here's what a countdown would look like using a for loop and another method we've not yet talked about, rev, to reverse the range:
fn main() {
for number in (1..4).rev() {
println!("{number}!");
}
println!("LIFTOFF!!!");
}
Ownership is a set of rules that govern how a Rust program manages memory. All programs have to manage the way they use a computer's memory while running. Some languages have garbage collection that regularly looks for no-longer-used memory as the program runs; in other languages, the programmer must explicitly allocate and free the memory. Rust uses a third approach: memory is managed through a system of ownership with a set of rules that the compiler checks. If any of the rules are violated, the program won't compile. None of the features of ownership will slow down your program while it's running.
Ownership Rules
First, let's take a look at the ownership rules. Keep these rules in mind as we work through the examples that illustrate them:
• Each value in Rust has an owner.
• There can only be one owner at a time.
• When the owner goes out of scope, the value will be dropped.
Memory and Allocation
In the case of a string literal, we know the contents at compile time, so the text is hardcoded directly into the final executable. This is why string literals are fast and efficient. But these properties only come from the string literal's immutability. Unfortunately, we can't put a blob of memory into the binary for each piece of text whose size is unknown at compile time and whose size might change while running the program.
With the String type, in order to support a mutable, growable piece of text, we need to allocate an amount of memory on the heap, unknown at compile time, to hold the contents. This means:
• The memory must be requested from the memory allocator at runtime.
• We need a way of returning this memory to the allocator when we're done with our String.
That first part is done by us: when we call String::from, its implementation requests the memory it needs. This is pretty much universal in programming languages.
However, the second part is different. In languages with a garbage collector (GC), the GC keeps track of and cleans up memory that isn't being used anymore, and we don't need to think about it. In most languages without a GC, it's our responsibility to identify when memory is no longer being used and to call code to explicitly free it, just as we did to request it. Doing this correctly has historically been a difficult programming problem. If we forget, we'll waste memory. If we do it too early, we'll have an invalid variable. If we do it twice, that's a bug too. We need to pair exactly one allocate with exactly one free.
Rust takes a different path: the memory is automatically returned once the variable that owns it goes out of scope.
{
let s = String::from("hello");// s is valid from this point //forward
// do stuff with s
} // this scope is now over, and s is no
// longer valid
There is a natural point at which we can return the memory our String needs to the allocator: when s goes out of scope. When a variable goes out of scope, Rust calls a special function for us. This function is called drop, and it's where the author of String can put the code to return the memory. Rust calls drop automatically at the closing curly bracket.
Variables and Data Interacting with Clone
If we do want to deeply copy the heap data of the String, not just the stack data, we can use a common method called clone:
let s1 = String::from("hello");
let s2 = s1.clone();
println!("s1 = {}, s2 = {}", s1, s2);
Stack-Only Data: Copy
let x = 5;
let y = x;
println!("x = {}, y = {}", x, y);
But this code seems to contradict what we just learned: we don't have a call to clone, but x is still valid and wasn't moved into y.
The reason is that types such as integers that have a known size at compile time are stored entirely on the stack, so copies of the actual values are quick to make. That means there's no reason we would want to prevent x from being valid after we create the variable y. In other words, there's no difference between deep and shallow copying here, so calling clone wouldn't do anything different from the usual shallow copying, and we can leave it out.
So, what types implement the Copy trait? You can check the documentation for the given type to be sure, but as a general rule, any group of simple scalar values can implement Copy, and nothing that requires allocation or is some form of resource can implement Copy. Here are some of the types that implement Copy:
• All the integer types, such as u32.
• The Boolean type, bool, with values true and false.
• All the floating-point types, such as f64.
• The character type, char.
• Tuples, if they only contain types that also implement Copy. For example, (i32, i32) implements Copy, but (i32, String) does not.
Ownership of a variable follows the same pattern every time: assigning a value to another variable moves it. When a variable that includes data on the heap goes out of scope, the value will be cleaned up by drop unless ownership of the data has been moved to another variable.
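As a small illustration of a move (a sketch added here, not from the original notes):
let s1 = String::from("hello");
let s2 = s1;            // the heap data is moved from s1 into s2
// println!("{}", s1);  // would not compile: s1 was moved and is no longer valid
println!("{}", s2);     // prints "hello"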
References and Borrowing
A reference is like a pointer in that it's an address we can follow to access the data stored at that address; that data is owned by some other variable. Unlike a pointer, a reference is guaranteed to point to a valid value of a particular type for the life of that reference.
fn main() {
let s1 = String::from("hello");
let len = calculate_length(&s1);
println!("The length of '{}' is {}.", s1, len);
}
fn calculate_length(s: &String) -> usize {
s.len()
}
First, notice that all the tuple code in the variable declaration and the function return value is gone. Second, note that we pass &s1 into calculate_length and, in its definition, we take &String rather than String. These ampersands represent references, and they allow you to refer to some value without taking ownership of it.
Note: the opposite of referencing by using & is dereferencing, which is accomplished with the dereference operator, *.
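A tiny sketch of dereferencing (added for illustration):
let x = 5;
let y = &x;          // y is a reference to x
assert_eq!(5, *y);   // *y follows the reference to the value it points to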
We call the action of creating a reference borrowing. As in real life, if a person owns something, you can borrow it from them. When you're done, you have to give it back. You don't own it.
So, what happens if we try to modify something we're borrowing?
// !! Compile time Error !!
fn main() {
let s = String::from("hello");
change(&s);
}
fn change(some_string: &String) {
some_string.push_str(", world");
}
Just as variables are immutable by default, so are references. We're not allowed to modify something we have a reference to.
Mutable References
Mutable references have one big restriction: if you have a mutable reference to a value, you can have no other references to that value. This code that attempts to create two mutable references to s will fail:
let mut s = String::from("hello");
let r1 = &mut s;
let r2 = &mut s;
println!("{}, {}", r1, r2);
The restriction preventing multiple mutable references to the same data at the same time allows for mutation but in a very controlled fashion. It's something that new Rustaceans struggle with because most languages let you mutate whenever you'd like. The benefit of having this restriction is that Rust can prevent data races at compile time. A data race is similar to a race condition and happens when these three behaviors occur:
• Two or more pointers access the same data at the same time.
• At least one of the pointers is being used to write to the data.
• There's no mechanism being used to synchronize access to the data.
As always, we can use curly brackets to create a new scope, allowing for multiple mutable references, just not simultaneous ones:
let mut s = String::from("hello");
{
let r1 = &mut s;
} // r1 goes out of scope here, so we can make a new reference with no problems.
let r2 = &mut s;
Rust enforces a similar rule for combining mutable and immutable references. This code results in an error:
let mut s = String::from("hello");
let r1 = &s; // no problem
let r2 = &s; // no problem
let r3 = &mut s; // BIG PROBLEM
println!("{}, {}, and {}", r1, r2, r3);
Whew! We also cannot have a mutable reference while we have an immutable one to the same value.
Users of an immutable reference don't expect the value to suddenly change out from under them! However, multiple immutable references are allowed because no one who is just reading the data has the ability to affect anyone else's reading of the data.
Note that a reference's scope starts from where it is introduced and continues through the last time that reference is used. For instance, this code will compile because the last usage of the immutable references, the println!, occurs before the mutable reference is introduced:
let mut s = String::from("hello");
let r1 = &s; // no problem
let r2 = &s; // no problem
println!("{} and {}", r1, r2);
// variables r1 and r2 will not be used after this point
let r3 = &mut s; // no problem
println!("{}", r3);
The scopes of the immutable references r1 and r2 end after the println! where they are last used, which is before the mutable reference r3 is created. These scopes don't overlap, so this code is allowed. The ability of the compiler to tell that a reference is no longer being used at a point before the end of the scope is called non-lexical lifetimes (NLL for short).
String Slices
A string slice is a reference to part of a String, and it looks like this:
let s = String::from("hello world");
let hello = &s[0..5];
let world = &s[6..11];
Rather than a reference to the entire String, hello is a reference to a portion of the String, specified in the extra [0..5] bit. We create slices using a range within brackets by specifying [starting_index..ending_index], where starting_index is the first position in the slice and ending_index is one more than the last position in the slice. Internally, the slice data structure stores the starting position and the length of the slice, which corresponds to ending_index minus starting_index. So, in the case of let world = &s[6..11];, world is a slice that contains a pointer to the byte at index 6 of s with a length of 5.
With Rust's .. range syntax, if you want to start at index zero, you can drop the value before the two periods. In other words, these are equal:
let s = String::from("hello");
let slice = &s[0..2];
let slice = &s[..2];
By the same token, if your slice includes the last byte of the String, you can drop the trailing number. That means these are equal:
let s = String::from("hello");
let len = s.len();
let slice = &s[3..len];
let slice = &s[3..];
You can also drop both values to take a slice of the entire string. So these are equal:
let s = String::from("hello");
let len = s.len();
let slice = &s[0..len];
let slice = &s[..];
Defining and Instantiating Structs
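The examples in this section use a User struct. The original notes omit its definition; a likely definition, reconstructed from the fields used below, is:
struct User {
    active: bool,
    username: String,
    email: String,
    sign_in_count: u64,
}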
fn build_user(email: String, username: String) -> User {
User {
email: email,
username: username,
active: true,
sign_in_count: 1,
}
}
It makes sense to name the function parameters with the same name as the struct fields, but having to repeat the email and username field names and variables is a bit tedious. If the struct had more fields, repeating each name would get even more annoying.
Using the Field Init Shorthand
Because the parameter names and the struct field names are exactly the same in the last example, we can use the field init shorthand syntax to rewrite build_user so it behaves exactly the same but doesn't have the repetition of email and username:
fn build_user(email: String, username: String) -> User {
User {
email,
username,
active: true,
sign_in_count: 1,
}
}
Here, we're creating a new instance of the User struct, which has a field named email. We want to set the email field's value to the value in the email parameter of the build_user function. Because the email field and the email parameter have the same name, we only need to write email rather than email: email.
Creating Instances from Other Instances with Struct Update Syntax
It's often useful to create a new instance of a struct that includes most of the values from another instance but changes some. You can do this using struct update syntax.
fn main() {
// --snip--
let user2 = User {
active: user1.active,
username: user1.username,
email: String::from("[email protected]"),
sign_in_count: user1.sign_in_count,
};
}
Using struct update syntax, we can achieve the same effect with less code, as shown in the next example. The syntax .. specifies that the remaining fields not explicitly set should have the same value as the fields in the given instance.
fn main() {
// --snip--
let user2 = User {
email: String::from("[email protected]"),
..user1
};
}
The code in the last example also creates an instance in user2 that has a different value for email but has the same values for the username, active, and sign_in_count fields from user1. The ..user1 must come last to specify that any remaining fields should get their values from the corresponding fields in user1, but we can choose to specify values for as many fields as we want in any order, regardless of the order of the fields in the struct's definition.
Note: the struct update syntax uses = like an assignment; this is because it moves the data, just as we saw in the "Variables and Data Interacting with Move" section. In this example, we can no longer use user1 after creating user2 because the String in the username field of user1 was moved into user2. If we had given user2 new String values for both email and username, and thus only used the active and sign_in_count values from user1, then user1 would still be valid after creating user2. Both active and sign_in_count are types that implement the Copy trait, so the behavior we discussed in the "Stack-Only Data: Copy" section applies.
Using Tuple Structs Without Named Fields to Create Different Types
Rust also supports structs that look similar to tuples, called tuple structs. Tuple structs have the added meaning the struct name provides but don't have names associated with their fields; rather, they just have the types of the fields. Tuple structs are useful when you want to give the whole tuple a name and make the tuple a different type from other tuples, and when naming each field as in a regular struct would be verbose or redundant.
struct Color(i32, i32, i32);
struct Point(i32, i32, i32);
fn main() {
let black = Color(0, 0, 0);
let origin = Point(0, 0, 0);
}
Unit-Like Structs Without Any Fields
You can also define structs that don't have any fields! These are called unit-like structs because they behave similarly to (), the unit type that we mentioned in "The Tuple Type" section. Unit-like structs can be useful when you need to implement a trait on some type but don't have any data that you want to store in the type itself.
struct AlwaysEqual;
fn main() {
let subject = AlwaysEqual;
}
Method Syntax
Methods are similar to functions: we declare them with the fn keyword and a name, they can have parameters and a return value, and they contain some code that's run when the method is called from somewhere else. Unlike functions, methods are defined within the context of a struct (or an enum or a trait object), and their first parameter is always self , which represents the instance of the struct the method is being called on.
struct Rectangle {
width: u32,
height: u32,
}
impl Rectangle {
fn area(&self) -> u32 {
self.width * self.height
}
}
fn main() {
let rect1 = Rectangle {
width: 30,
height: 50,
};
println!(
"The area of the rectangle is {} square pixels.",rect1.area()
);
}
To define the function within the context of Rectangle , we start an impl (implementation) block for Rectangle . Everything within this impl block will be associated with the Rectangle type. Then we move the area function within the impl curly brackets and change the first (and in this case, only) parameter to be self in the signature and everywhere within the body. In main , where we called the area function and passed rect1 as an argument, we can instead use method syntax to call the area method on our Rectangle instance. The method syntax goes after an instance: we add a dot followed by the method name, parentheses, and any arguments.
In the signature for area , we use &self instead of rectangle: &Rectangle . The &self is actually short for self: &Self . Within an impl block, the type Self is an alias for the type that the impl block is for. Methods must have a parameter named self of type Self for their first parameter, so Rust lets you abbreviate this with only the name self in the first parameter spot. Note that we still need to use the & in front of the self shorthand to indicate this method borrows the Self instance, just as we did in rectangle: &Rectangle . Methods can take ownership of self , borrow self immutably as we've done here, or borrow self mutably, just as they can any other parameter.
We've chosen &self here for the same reason we used &Rectangle in the function version: we don't want to take ownership, and we just want to read the data in the struct, not write to it. If we wanted to change the instance that we've called the method on as part of what the method does, we'd use &mut self as the first parameter. Having a method that takes ownership of the instance by using just self as the first parameter is rare; this technique is usually used when the method transforms self into something else and you want to prevent the caller from using the original instance after the transformation.
Note that we can choose to give a method the same name as one of the struct's fields. For example, we can define a method on Rectangle also named width :
impl Rectangle {
fn width(&self) -> bool {
self.width > 0
}
}
Often, but not always, when we give a method the same name as a field, we want it to only return the value in the field and do nothing else. Methods like this are called getters , and Rust does not implement them automatically for struct fields as some other languages do. Getters are useful because you can make the field private but the method public, and thus enable read-only access to that field as part of the type's public API.
Associated Functions
All functions defined within an impl block are called associated functions because they're associated with the type named after the impl . We can define associated functions that don't have self as their first parameter (and thus are not methods) because they don't need an instance of the type to work with. We've already used one function like this: the String::from function that's defined on the String type.
Associated functions that aren't methods are often used for constructors that will return a new instance of the struct. These are often called new , but new isn't a special name and isn't built into the language. For example, we could choose to provide an associated function named square that would have one dimension parameter and use that as both width and height, thus making it easier to create a square Rectangle rather than having to specify the same value twice:
impl Rectangle {
fn square(size: u32) -> Self {
Self {
width: size,
height: size,
}
}
}
The Self keywords in the return type and in the body of the function are aliases for the type that appears after the impl keyword, which in this case is Rectangle .
To call this associated function, we use the :: syntax with the struct name; let sq = Rectangle::square(3); is an example. This function is namespaced by the struct: the :: syntax is used for both associated functions and namespaces created by modules.
Multiple impl Blocks
Each struct is allowed to have multiple impl blocks.
impl Rectangle {
fn area(&self) -> u32 {
self.width * self.height
}
}
impl Rectangle {
fn can_hold(&self, other: &Rectangle) -> bool {
self.width > other.width && self.height > other.height
}
}
Enums and Pattern Matching
Enums allow you to define a type by enumerating its possible variants.
enum IpAddrKind {
V4,
V6,
}
IpAddrKind is now a custom data type that we can use elsewhere in our code.
Enum Values
We can create instances of each of the two variants of IpAddrKind like this:
let four = IpAddrKind::V4;
let six = IpAddrKind::V6;
Note that the variants of the enum are namespaced under its identifier, and we use a double colon to separate the two. This is useful because now both values IpAddrKind::V4 and IpAddrKind::V6 are of the same type: IpAddrKind . We can then, for instance, define a function that takes any IpAddrKind :
fn route(ip_kind: IpAddrKind) {}
And we can call this function with either variant:
route(IpAddrKind::V4);
route(IpAddrKind::V6);
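For context, the struct-based approach that the next paragraph contrasts with might look like this (a sketch; the original notes skip it):
struct IpAddr {
    kind: IpAddrKind,
    address: String,
}

let home = IpAddr {
    kind: IpAddrKind::V4,
    address: String::from("127.0.0.1"),
};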
However, representing the same concept using just an enum is more concise: rather than an enum inside a struct, we can put data directly into each enum variant. This new definition of the IpAddr enum says that both V4 and V6 variants will have associated String values:
enum IpAddr {
V4(String),
V6(String),
}
let home = IpAddr::V4(String::from("127.0.0.1"));
let loopback = IpAddr::V6(String::from("::1"));
There's another advantage to using an enum rather than a struct : each variant can have different types and amounts of associated data. Version four type IP addresses will always have four numeric components that will have values between 0 and 255 . If we wanted to store V4 addresses as four u8 values but still express V6 addresses as one String value, we wouldn't be able to with a struct. Enums handle this case with ease:
enum IpAddr {
V4(u8, u8, u8, u8),
V6(String),
}
let home = IpAddr::V4(127, 0, 0, 1);
let loopback = IpAddr::V6(String::from("::1"));
struct Ipv4Addr {
// --snip--
}
struct Ipv6Addr {
// --snip--
}
enum IpAddr {
V4(Ipv4Addr),
V6(Ipv6Addr),
}
This code illustrates that you can put any kind of data inside an enum variant: strings, numeric types, or structs, for example. You can even include another enum! Also, standard library types are often not much more complicated than what you might come up with.
enum Message {
Quit,
Move { x: i32, y: i32 },
Write(String),
ChangeColor(i32, i32, i32),
}
Defining an enum with variants such as the ones in last example is similar to defining different kinds of struct definitions, except the enum doesn't use the struct keyword and all the variants are grouped together under the Message type. The following structs could hold the same data that the preceding enum variants hold:
struct QuitMessage; // unit struct
struct MoveMessage {
x: i32,
y: i32,
}
struct WriteMessage(String); // tuple struct
struct ChangeColorMessage(i32, i32, i32); // tuple struct
But if we used the different structs, which each have their own type, we couldn't as easily define a function to take any of these kinds of messages as we could with the Message enum.
There is one more similarity between enums and structs: just as we're able to define methods on structs using impl , we're also able to define methods on enums. Here's a method named call that we could define on our Message enum:
impl Message {
fn call(&self) {
// method body would be defined here
}
}
let m = Message::Write(String::from("hello"));
m.call();
The Option Enum and Its Advantages Over Null Values
This section explores a case study of Option , which is another enum defined by the standard library. The Option type encodes the very common scenario in which a value could be something or it could be nothing.
Programming language design is often thought of in terms of which features you include, but the features you exclude are important too. Rust doesn't have the null feature that many other languages have. Null is a value that means there is no value there. In languages with null , variables can always be in one of two states: null or not-null.
The problem isn't really with the concept but with the particular implementation. As such, Rust does not have nulls, but it does have an enum that can encode the concept of a value being present or absent. This enum is Option<T> , and it is defined by the standard library as follows:
enum Option<T> {
None,
Some(T),
}
In short, because Option<T> and T (where T can be any type) are different types, the compiler won't let us use an Option<T> value as if it were definitely a valid value. For example, this code won't compile because it's trying to add an i8 to an Option<i8> :
// !! NOT COMPILE !!
let x: i8 = 5;
let y: Option<i8> = Some(5);
let sum = x + y;
In other words, you have to convert an Option<T> to a T before you can perform T operations with it. Generally, this helps catch one of the most common issues with null: assuming that something isn't null when it actually is.
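One way to make the earlier addition compile (a sketch, not from the original notes) is to handle the Option explicitly before doing arithmetic:
let x: i8 = 5;
let y: Option<i8> = Some(5);
let sum = x + y.unwrap_or(0); // supply a default value in case y is None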
Eliminating the risk of incorrectly assuming a not-null value helps you to be more confident in your code. In order to have a value that can possibly be null, you must explicitly opt in by making the type of that value Option<T> . Then, when you use that value, you are required to explicitly handle the case when the value is null. Everywhere that a value has a type that isn't an Option<T> , you can safely assume that the value isn't null. This was a deliberate design decision for Rust to limit null's pervasiveness and increase the safety of Rust code.
The match Control Flow Construct
Rust has an extremely powerful control flow construct called match that allows you to compare a value against a series of patterns and then execute code based on which pattern matches. Patterns can be made up of literal values, variable names, wildcards, and many other things. The power of match comes from the expressiveness of the patterns and the fact that the compiler confirms that all possible cases are handled.
Think of a match expression as being like a coin-sorting machine: coins slide down a track with variously sized holes along it, and each coin falls through the first hole it encounters that it fits into. In the same way, values go through each pattern in a match , and at the first pattern the value “fits,” the value falls into the associated code block to be used during execution.
enum Coin {
Penny,
Nickel,
Dime,
Quarter,
}
fn value_in_cents(coin: Coin) -> u8 {
match coin {
Coin::Penny => 1,
Coin::Nickel => 5,
Coin::Dime => 10,
Coin::Quarter => 25,
}
}
This seems very similar to an expression used with if , but there's a big difference: with if , the expression needs to return a Boolean value, but here, it can return any type.
Patterns that Bind to Values
Another useful feature of match arms is that they can bind to the parts of the values that match the pattern. This is how we can extract values out of enum variants.
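A sketch of such a binding, following the book's coin example (the UsState enum here is illustrative):
#[derive(Debug)]
enum UsState {
    Alabama,
    Alaska,
    // --snip--
}

enum Coin {
    Penny,
    Nickel,
    Dime,
    Quarter(UsState),
}

fn value_in_cents(coin: Coin) -> u8 {
    match coin {
        Coin::Penny => 1,
        Coin::Nickel => 5,
        Coin::Dime => 10,
        Coin::Quarter(state) => {
            println!("State quarter from {:?}!", state); // state binds to the value inside Quarter
            25
        }
    }
}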
Catch-all Patterns and the _ Placeholder
Using enums, we can also take special actions for a few particular values, but for all other values take one default action. Imagine we're implementing a game where, if you roll a 3 on a dice roll, your player doesn't move, but instead gets a new fancy hat. If you roll a 7, your player loses a fancy hat. For all other values, your player moves that number of spaces on the game board. Here's a match that implements that logic, with the result of the dice roll hardcoded rather than a random value, and all other logic represented by functions without bodies because actually implementing them is out of scope for this example:
let dice_roll = 9;
match dice_roll {
3 => add_fancy_hat(),
7 => remove_fancy_hat(),
other => move_player(other),
}
fn add_fancy_hat() {}
fn remove_fancy_hat() {}
fn move_player(num_spaces: u8) {}
For the first two arms, the patterns are the literal values 3 and 7. For the last arm that covers every other possible value, the pattern is the variable we've chosen to name other . The code that runs for the other arm uses the variable by passing it to the move_player function.
Rust also has a pattern we can use when we want a catch-all but don't want to use the value in the catch-all pattern: _ is a special pattern that matches any value and does not bind to that value. This tells Rust we aren't going to use the value, so Rust won't warn us about an unused variable. Let's change the rules of the game: now, if you roll anything other than a 3 or a 7, you must roll again. We no longer need to use the catch-all value, so we can change our code to use _ instead of the variable named other :
let dice_roll = 9;
match dice_roll {
3 => add_fancy_hat(),
7 => remove_fancy_hat(),
_ => reroll(),
}
fn add_fancy_hat() {}
fn remove_fancy_hat() {}
fn reroll() {}
This example also meets the exhaustiveness requirement because we're explicitly ignoring all other values in the last arm; we haven't forgotten anything. Finally, we'll change the rules of the game one more time, so that nothing else happens on your turn if you roll anything other than a 3 or a 7. We can express that by using the unit value (the empty tuple type we mentioned in “The Tuple Type” section) as the code that goes with the _ arm:
let dice_roll = 9;
match dice_roll {
3 => add_fancy_hat(),
7 => remove_fancy_hat(),
_ => (),
}
fn add_fancy_hat() {}
fn remove_fancy_hat() {}
Here, we're telling Rust explicitly that we aren't going to use any other value that doesn't match a pattern in an earlier arm, and we don't want to run any code in this case.
Concise Control Flow with if let
The if let syntax lets you combine if and let into a less verbose way to handle values that match one pattern while ignoring the rest. Consider the program in example that matches on an Option<u8> value in the config_max variable but only wants to execute code if the value is the Some variant.
let config_max = Some(3u8);
match config_max {
Some(max) => println!("The maximum is configured to be {}", max),
_ => (),
}
If the value is Some , we print out the value in the Some variant by binding the value to the variable max in the pattern. We don't want to do anything with the None value. To satisfy the match expression, we have to add _ => () after processing just one variant, which is annoying boilerplate code to add.
Instead, we could write this in a shorter way using if let . The following code behaves the same as the match
let config_max = Some(3u8);
if let Some(max) = config_max {
println!("The maximum is configured to be {}", max);
}
The syntax if let takes a pattern and an expression separated by an equal sign. It works the same way as a match , where the expression is given to the match and the pattern is its first arm. In this case, the pattern is Some(max) , and the max binds to the value inside the Some . We can then use max in the body of the if let block in the same way as we used max in the corresponding match arm. The code in the if let block isn't run if the value doesn't match the pattern.
Using if let means less typing, less indentation, and less boilerplate code. However, you lose the exhaustive checking that match enforces. Choosing between match and if let depends on what you're doing in your particular situation and whether gaining conciseness is an appropriate trade-off for losing exhaustive checking.
In other words, you can think of if let as syntax sugar for a match that runs code when the value matches one pattern and then ignores all other values.
Packages and Crates
A crate is the smallest amount of code that the Rust compiler considers at a time. Even if you run rustc rather than cargo and pass a single source code file, the compiler considers that file to be a crate. Crates can contain modules, and the modules may be defined in other files that get compiled with the crate, as we'll see in the coming sections.
A crate can come in one of two forms: a binary crate or a library crate. Binary crates are programs you can compile to an executable that you can run, such as a command-line program or a server. Each must have a function called main that defines what happens when the executable runs. All the crates we've created so far have been binary crates.
Library crates don't have a main function, and they don't compile to an executable. Instead, they define functionality intended to be shared with multiple projects.
A package can contain as many binary crates as you like, but at most only one library crate. A package must contain at least one crate, whether that's a library or binary crate.
Grouping Related Code in Modules
Modules let us organize code within a crate for readability and easy reuse. Modules also allow us to control the privacy of items, because code within a module is private by default. Private items are internal implementation details not available for outside use. We can choose to make modules and the items within them public, which exposes them to allow external code to use and depend on them.
Earlier, we mentioned that src/main.rs and src/lib.rs are called crate roots. The reason for their name is that the contents of either of these two files form a module named crate at the root of the crate's module structure, known as the module tree.
Paths for Referring to an Item in the Module Tree
To show Rust where to find an item in a module tree, we use a path in the same way we use a path when navigating a filesystem. To call a function, we need to know its path.
A path can take two forms: • An absolute path is the full path starting from a crate root; for code from an external crate, the absolute path begins with the crate name, and for code from the current crate, it starts with the literal crate . • A relative path starts from the current module and uses self , super , or an identifier in the current module.
Both absolute and relative paths are followed by one or more identifiers separated by double colons :: .
mod front_of_house {
mod hosting {
fn add_to_waitlist() {}
}
}
pub fn eat_at_restaurant() {
// Absolute path
crate::front_of_house::hosting::add_to_waitlist();
// Relative path
front_of_house::hosting::add_to_waitlist();
}
The first time we call the add_to_waitlist function in eat_at_restaurant , we use an absolute path. The add_to_waitlist function is defined in the same crate as eat_at_restaurant , which means we can use the crate keyword to start an absolute path. We then include each of the successive modules until we make our way to add_to_waitlist . You can imagine a filesystem with the same structure: we'd specify the path /front_of_house/hosting/add_to_waitlist to run the add_to_waitlist program; using the crate name to start from the crate root is like using / to start from the filesystem root in your shell.
The second time we call add_to_waitlist in eat_at_restaurant , we use a relative path. The path starts with front_of_house , the name of the module defined at the same level of the module tree as eat_at_restaurant . Here the filesystem equivalent would be using the path front_of_house/hosting/add_to_waitlist . Starting with a module name means that the path is relative.
Items in a parent module can't use the private items inside child modules, but items in child modules can use the items in their ancestor modules. This is because child modules wrap and hide their implementation details, but the child modules can see the context in which they're defined. To continue with our metaphor, think of the privacy rules as being like the back office of a restaurant: what goes on in there is private to restaurant customers, but office managers can see and do everything in the restaurant they operate.
Rust chose to have the module system function this way so that hiding inner implementation details is the default. That way, you know which parts of the inner code you can change without breaking outer code. However, Rust does give you the option to expose inner parts of child modules' code to outer ancestor modules by using the pub keyword to make an item public.
Exposing Paths with the pub Keyword
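The discussion below assumes the earlier front_of_house example with pub added to hosting and add_to_waitlist, roughly like this (a sketch matching the book's listing):
mod front_of_house {
    pub mod hosting {
        pub fn add_to_waitlist() {}
    }
}

pub fn eat_at_restaurant() {
    // Absolute path
    crate::front_of_house::hosting::add_to_waitlist();
    // Relative path
    front_of_house::hosting::add_to_waitlist();
}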
In the absolute path, we start with crate , the root of our crate's module tree. The front_of_house module is defined in the crate root. While front_of_house isn't public, because the eat_at_restaurant function is defined in the same module as front_of_house (that is, eat_at_restaurant and front_of_house are siblings), we can refer to front_of_house from eat_at_restaurant . Next is the hosting module marked with pub . We can access the parent
module of hosting , so we can access hosting . Finally, the add_to_waitlist function is marked with pub and we can access its parent module, so this function call works!
In the relative path, the logic is the same as the absolute path except for the first step: rather than starting from the crate root, the path starts from front_of_house . The front_of_house module is defined within the same module as eat_at_restaurant , so the relative path starting from the module in which eat_at_restaurant is defined works. Then, because hosting and add_to_waitlist are marked with pub , the rest of the path works, and this function call is valid!
Best Practices for Packages with a Binary and a Library
We mentioned a package can contain both a src/main.rs binary crate root as well as a src/lib.rs library crate root, and both crates will have the package name by default. Typically, packages with this pattern of containing both a library and a binary crate will have just enough code in the binary crate to start an executable that calls code with the library crate. This lets other projects benefit from the most functionality that the package provides, because the library crate's code can be shared.
The module tree should be defined in src/lib.rs . Then, any public items can be used in the binary crate by starting paths with the name of the package. The binary crate becomes a user of the library crate just like a completely external crate would use the library crate: it can only use the public API. This helps you design a good API; not only are you the author, you're also a client!
Starting Relative Paths with super
We can construct relative paths that begin in the parent module, rather than the current module or the crate root, by using super at the start of the path. This is like starting a filesystem path with the .. syntax. Using super allows us to reference an item that we know is in the parent module, which can make rearranging the module tree easier when the module is closely related to the parent, but the parent might be moved elsewhere in the module tree someday.
Consider the code in Listing 7-8 that models the situation in which a chef fixes an incorrect order and personally brings it out to the customer. The function fix_incorrect_order defined in the back_of_house module calls the function deliver_order defined in the parent module by specifying the path to deliver_order starting with super :
fn deliver_order() {}
mod back_of_house {
fn fix_incorrect_order() {
cook_order();
super::deliver_order();
}
fn cook_order() {}
}
The fix_incorrect_order function is in the back_of_house module, so we can use super to go to the parent module of back_of_house , which in this case is crate , the root. From there, we look for deliver_order and find it. Success! We think the back_of_house module and the deliver_order function are likely to stay in the same relationship to each other and get moved together should we decide to reorganize the crate's module tree. Therefore, we used super so we'll have fewer places to update code in the future if this code gets moved to a different module.
Making Structs and Enums Public
We can also use pub to designate structs and enums as public, but there are a few details extra to the usage of pub with structs and enums. If we use pub before a struct definition, we make the struct public, but the struct's fields will still be private. We can make each field public or not on a case-by-case basis. In the example, we've defined a public back_of_house::Breakfast struct with a public toast field but a private seasonal_fruit field. This models the case in a restaurant where the customer can pick the type of bread that comes with a meal, but the chef decides which fruit accompanies the meal based on what's in season and in stock. The available fruit changes quickly, so customers can't choose the fruit or even see which fruit they'll get.
mod back_of_house {
pub struct Breakfast {
pub toast: String,
seasonal_fruit: String,
}
impl Breakfast {
pub fn summer(toast: &str) -> Breakfast {
Breakfast {
toast: String::from(toast),
seasonal_fruit: String::from("peaches"),
}
}
}
}
pub fn eat_at_restaurant() {
// Order a breakfast in the summer with Rye toast
let mut meal = back_of_house::Breakfast::summer("Rye");
// Change our mind about what bread we'd like
meal.toast = String::from("Wheat");
println!("I'd like {} toast please", meal.toast);
// The next line won't compile if we uncomment it; we're not allowed
// to see or modify the seasonal fruit that comes with the meal
// meal.seasonal_fruit = String::from("blueberries");
}
In contrast, if we make an enum public, all of its variants are then public. We only need the pub before the enum keyword, as in the sketch below.
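A sketch of a public enum (the Appetizer example referenced below; reconstructed, not in the original notes):
mod back_of_house {
    pub enum Appetizer {
        Soup,
        Salad,
    }
}

pub fn eat_at_restaurant() {
    let order1 = back_of_house::Appetizer::Soup;
    let order2 = back_of_house::Appetizer::Salad;
}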
Because we made the Appetizer enum public, we can use the Soup and Salad variants in eat_at_restaurant . Enums aren't very useful unless their variants are public; it would be annoying to have to annotate all enum variants with pub in every case, so the default for enum variants is to be public. Structs are often useful without their fields being public, so struct fields follow the general rule of everything being private by default unless annotated with pub .
Bringing Paths into Scope with the use Keyword
Having to write out the paths to call functions can feel inconvenient and repetitive. Whether we chose the absolute or relative path to the add_to_waitlist function, every time we wanted to call add_to_waitlist we had to specify front_of_house and hosting too. Fortunately, there's a way to simplify this process: we can create a shortcut to a path with the use keyword once, and then use the shorter name everywhere else in the scope. Here we bring the crate::front_of_house::hosting module into the scope of the eat_at_restaurant function so we only have to specify hosting::add_to_waitlist to call the add_to_waitlist function in eat_at_restaurant .
mod front_of_house {
pub mod hosting {
pub fn add_to_waitlist() {}
}
}
use crate::front_of_house::hosting;
pub fn eat_at_restaurant() {
hosting::add_to_waitlist();
}
Adding use and a path in a scope is similar to creating a symbolic link in the filesystem. By adding use crate::front_of_house::hosting in the crate root, hosting is now a valid name in that scope, just as though the hosting module had been defined in the crate root. Paths brought into scope with use also check privacy, like any other paths.
Providing New Names with the as Keyword
There's another solution to the problem of bringing two types of the same name into the same scope with use : after the path, we can specify as and a new local name, or alias, for the type.
use std::fmt::Result;
use std::io::Result as IoResult;
fn function1() -> Result {
// --snip--
}
Re-exporting Names with pub use
When we bring a name into scope with the use keyword, the name available in the new scope is private. To enable the code that calls our code to refer to that name as if it had been defined in that code's scope, we can combine pub and use . This technique is called re-exporting because we're bringing an item into scope but also making that item available for others to bring into their scope.
mod front_of_house {
pub mod hosting {
pub fn add_to_waitlist() {}
}
}
pub use crate::front_of_house::hosting;
pub fn eat_at_restaurant() {
hosting::add_to_waitlist();
}
Using Nested Paths to Clean Up Large use Lists
If we're using multiple items defined in the same crate or same module, listing each item on its own line can take up a lot of vertical space in our files.
// --snip--
use std::cmp::Ordering;
use std::io;
// --snip--
Instead, we can use nested paths to bring the same items into scope in one line. We do this by specifying the common part of the path, followed by two colons, and then curly brackets around a list of the parts of the paths that differ,
// --snip--
use std::{cmp::Ordering, io};
// --snip
We can use a nested path at any level in a path, which is useful when combining two use statements that share a subpath.
use std::io;
use std::io::Write;
The common part of these two paths is std::io , and that's the complete first path. To merge these two paths into one use statement, we can use self in the nested path,
use std::io::{self, Write};
This line brings std::io and std::io::Write into scope.
The Glob Operator
If we want to bring all public items defined in a path into scope, we can specify that path followed by the * glob operator:
use std::collections::*;
This use statement brings all public items defined in std::collections into the current scope. Be careful when using the glob operator! Glob can make it harder to tell what names are in scope and where a name used in your program was defined.
Common Collections
Unlike the built-in array and tuple types, the data these collections point to is stored on the heap, which means the amount of data does not need to be known at compile time and can grow or shrink as the program runs.
• A vector allows you to store a variable number of values next to each other.
• A string is a collection of characters. We've mentioned the String type previously, but in this chapter we'll talk about it in depth.
• A hash map allows you to associate a value with a particular key. It's a particular implementation of the more general data structure called a map.
Creating a New Vector
let v: Vec<i32> = Vec::new();
Note that we added a type annotation here. Because we aren't inserting any values into this vector, Rust doesn't know what kind of elements we intend to store. This is an important point. Vectors are implemented using generics;
let v = vec![1, 2, 3];
Updating a Vector
To create a vector and then add elements to it, we can use the push method,
let mut v = Vec::new();
v.push(5);
v.push(6);
v.push(7);
v.push(8);
Reading Elements of Vectors
There are two ways to reference a value stored in a vector: via indexing or using the get method. In the following examples, we've annotated the types of the values that are returned from these functions for extra clarity.
let v = vec![1, 2, 3, 4, 5];
let third: &i32 = &v[2];
println!("The third element is {}", third);
let third: Option<&i32> = v.get(2);
match third {
Some(third) => println!("The third element is {}", third),
None => println!("There is no third element."),
}
Note a few details here. We use the index value of 2 to get the third element because vectors are indexed by number, starting at zero. Using & and [] gives us a reference to the element at the index value. When we use the get method with the index passed as an argument, we get an Option<&T> that we can use with match .
The reason Rust provides these two ways to reference an element is so you can choose how the program behaves when you try to use an index value outside the range of existing elements. As an example, let's see what happens when we have a vector of five elements and then we try to access an element at index 100 with each technique,
let v = vec![1, 2, 3, 4, 5];
let does_not_exist = &v[100];
let does_not_exist = v.get(100);
When we run this code, the first [] method will cause the program to panic because it references a nonexistent element. This method is best used when you want your program to crash if there's an attempt to access an element past the end of the vector.
When the get method is passed an index that is outside the vector, it returns None without panicking. You would use this method if accessing an element beyond the range of the vector may happen occasionally under normal circumstances.
Using an Enum to Store Multiple Types
Vectors can only store values that are the same type.
For example, say we want to get values from a row in a spreadsheet in which some of the columns in the row contain integers, some floating-point numbers, and some strings. We can define an enum whose variants will hold the different value types, and all the enum variants will be considered the same type: that of the enum. Then we can create a vector to hold that enum and so, ultimately, holds different types.
enum SpreadsheetCell {
Int(i32),
Float(f64),
Text(String),
}
let row = vec![
SpreadsheetCell::Int(3),
SpreadsheetCell::Text(String::from("blue")),
SpreadsheetCell::Float(10.12),
];
Rust needs to know what types will be in the vector at compile time so it knows exactly how much memory on the heap will be needed to store each element. We must also be explicit about what types are allowed in this vector. If Rust allowed a vector to hold any type, there would be a chance that one or more of the types would cause errors with the operations performed on the elements of the vector. Using an enum plus a match expression means that Rust will ensure at compile time that every possible case is handled
If you don't know the exhaustive set of types a program will get at runtime to store in a vector, the enum technique won't work. Instead, you can use a trait object.
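A hedged sketch of that alternative, storing values of different types behind a Display trait object (not from the original notes):
let items: Vec<Box<dyn std::fmt::Display>> = vec![
    Box::new(3),
    Box::new(10.12),
    Box::new(String::from("blue")),
];
for item in &items {
    println!("{}", item); // each element is printed through the trait object
}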
What Is a String?
The String type, which is provided by Rust's standard library rather than coded into the core language, is a growable, mutable, owned, UTF-8 encoded string type. When Rustaceans refer to "strings" in Rust, they might be referring to either the String or the string slice &str types, not just one of those types. Although this section is largely about String , both types are used heavily in Rust's standard library, and both String and string slices are UTF-8 encoded.
Creating a New String
Many of the same operations available with Vec<T> are available with String as well, because String is actually implemented as a wrapper around a vector of bytes with some extra guarantees, restrictions, and capabilities.
let mut s = String::new();
This line creates a new empty string called s , which we can then load data into. Often, we'll have some initial data that we want to start the string with. For that, we use the to_string method, which is available on any type that implements the Display trait.
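For example (a small sketch):
let data = "initial contents";
let s = data.to_string();
// the method also works on a literal directly:
let s = "initial contents".to_string();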
Bytes and Scalar Values and Grapheme Clusters! Oh My!
Another point about UTF-8 is that there are actually three relevant ways to look at strings from Rust's perspective: as bytes, scalar values, and grapheme clusters (the closest thing to what we would call letters).
If we look at the Hindi word "नमस्ते" written in the Devanagari script, it is stored as a vector of u8 values that looks like this:
[224, 164, 168, 224, 164, 174, 224, 164, 184, 224, 165, 141, 224, 164, 164,
224, 165, 135]
That's 18 bytes and is how computers ultimately store this data. If we look at them as Unicode scalar values, which are what Rust's char type is, those bytes look like this:
['न', 'म', 'स', '◌्', 'त', '◌े']
There are six char values here, but the fourth and sixth are not letters: they're diacritics that don't make sense on their own. Finally, if we look at them as grapheme clusters, we'd get what a person would call the four letters that make up the Hindi word:
["न", "म", "स्", "ते"]
Rust provides different ways of interpreting the raw string data that computers store so that each program can choose the interpretation it needs, no matter what human language the data is in.
A final reason Rust doesn't allow us to index into a String to get a character is that indexing operations are expected to always take constant time (O(1)). But it isn't possible to guarantee that performance with a String , because Rust would have to walk through the contents from the beginning to the index to determine how many valid characters there were.
Slicing Strings
Rather than indexing using [ ] with a single number, you can use [] with a range to create a string slice containing particular bytes:
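For example (a sketch using a Cyrillic string, where each character is two bytes):
let hello = "Здравствуйте";
let s = &hello[0..4]; // s is "Зд": the first four bytes, i.e. the first two characters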
Methods for Iterating Over Strings
The best way to operate on pieces of strings is to be explicit about whether you want characters or bytes. For individual Unicode scalar values, use the chars method. Calling chars on “Зд” separates out and returns two values of type char , and you can iterate over the result to access each element:
for c in "Зд".chars() {
println!("{}", c);
}
This code will print the following:
З
д
Alternatively, the bytes method returns each raw byte, which might be appropriate for your domain:
for b in "Зд".bytes() {
println!("{}", b);
}
This code will print the four bytes that make up this string:
208
151
208
180
But be sure to remember that valid Unicode scalar values may be made up of more than one byte. Getting grapheme clusters from strings, as with the Devanagari script, is complex, so this functionality is not provided by the standard library.
Storing Keys with Associated Values in Hash Maps The last of our common collections is the hash map. The type HashMap<K, V> stores a mapping of keys of type K to values of type V using a hashing function, which determines how it places these keys and values into memory. Many programming languages support this kind of data structure, but they often use a different name, such as hash, map, object, hash table, dictionary, or associative array, just to name a few.
Updating a Hash Map
Although the number of key and value pairs is growable, each unique key can only have one value associated with it at a time (but not vice versa: for example, both the Blue team and the Yellow team could have value 10 stored in the scores hash map).
When you want to change the data in a hash map, you have to decide how to handle the case when a key already has a value assigned. You could replace the old value with the new value, completely disregarding the old value. You could keep the old value and ignore the new value, only adding the new value if the key doesn't already have a value. Or you could combine the old value and the new value. Let's look at how to do each of these!
Adding a Key and Value Only If a Key Isn't Present
Hash maps have a special API for this called entry that takes the key you want to check as a parameter. The return value of the entry method is an enum called Entry that represents a value that might or might not exist. Let's say we want to check whether the key for the Yellow team has a value associated with it. If it doesn't, we want to insert the value 50 , and the same for the Blue team.
use std::collections::HashMap;
let mut scores = HashMap::new();
scores.insert(String::from("Blue"), 10);
scores.entry(String::from("Yellow")).or_insert(50);
scores.entry(String::from("Blue")).or_insert(50);
println!("{:?}", scores);
Rust groups errors into two major categories: recoverable and unrecoverable errors. For a recoverable error, such as a file not found error, we most likely just want to report the problem to the user and retry the operation. Unrecoverable errors are always symptoms of bugs, like trying to access a location beyond the end of an array, and so we want to immediately stop the program.
Most languages don't distinguish between these two kinds of errors and handle both in the same way, using mechanisms such as exceptions. Rust doesn't have exceptions. Instead, it has the type Result<T, E> for recoverable errors and the panic! macro that stops execution when the program encounters an unrecoverable error. This chapter covers calling panic! first and then talks about returning Result<T, E> values.
Unwinding the Stack or Aborting in Response to a Panic
By default, when a panic occurs, the program starts unwinding, which means Rust walks back up the stack and cleans up the data from each function it encounters. However, this walking back and cleanup is a lot of work. Rust, therefore, allows you to choose the alternative of immediately aborting, which ends the program without cleaning up.
Memory that the program was using will then need to be cleaned up by the operating system. If in your project you need to make the resulting binary as small as possible, you can switch from unwinding to aborting upon a panic by adding panic = 'abort' to the appropriate [profile] sections in your Cargo.toml file. For example, if you want to abort on panic in release mode, add this:
[profile.release]
panic = 'abort'
Using a panic! Backtrace
A backtrace is a list of all the functions that have been called to get to this point. Backtraces in Rust work as they do in other languages: the key to reading the backtrace is to start from the top and read until you see files you wrote. That's the spot where the problem originated. The lines above that spot are code that your code has called; the lines below are code that called your code. These before-and-after lines might include core Rust code, standard library code, or crates that you're using. Let's try getting a backtrace by setting the RUST_BACKTRACE environment variable to any value except 0.
$ RUST_BACKTRACE=1 cargo run
thread 'main' panicked at 'index out of bounds: the len is 3 but the index is 99',
src/main.rs:4:5 stack backtrace:
0: rust_begin_unwindat
/rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panicking.rs:483
1: core::panicking::panic_fmt at
/rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/core/src/panicking.rs:85
2: core::panicking::panic_bounds_check at
/rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/core/src
Recoverable Errors with Result
enum Result<T, E> {
Ok(T),
Err(E),
}
The T and E are generic type parameters: T represents the type of the value that will be returned in a success case within the Ok variant, and E represents the type of the error that will be returned in a failure case within the Err variant.
Matching on Different Errors
use std::fs::File;
use std::io::ErrorKind;
fn main() {
let greeting_file_result = File::open("hello.txt");
let greeting_file = match greeting_file_result {
Ok(file) => file,
Err(error) => match error.kind() {
ErrorKind::NotFound => match File::create("hello.txt") {
Ok(fc) => fc,
Err(e) => panic!("Problem creating the file: {:?}", e),
},
other_error => {
panic!("Problem opening the file: {:?}", other_error);
}
},
};
}
Propagating Errors
When a function's implementation calls something that might fail, instead of handling the error within the function itself, you can return the error to the calling code so that it can decide what to do. This is known as propagating the error and gives more control to the calling code, where there might be more information or logic that dictates how the error should be handled than what you have available in the context of your code.
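Written out by hand with match, a function that propagates errors might look roughly like this (reading a username from a file, as in the shortcut version below):
use std::fs::File;
use std::io::{self, Read};

fn read_username_from_file() -> Result<String, io::Error> {
    let username_file_result = File::open("hello.txt");

    // Either keep the file handle or return the error to the caller early.
    let mut username_file = match username_file_result {
        Ok(file) => file,
        Err(e) => return Err(e),
    };

    let mut username = String::new();

    // Again, hand any read error back to the caller instead of panicking.
    match username_file.read_to_string(&mut username) {
        Ok(_) => Ok(username),
        Err(e) => Err(e),
    }
}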
A Shortcut for Propagating Errors: the ? Operator
use std::fs::File;
use std::io;
use std::io::Read;
fn read_username_from_file() -> Result<String, io::Error> {
let mut username_file = File::open("hello.txt")?;
let mut username = String::new();
username_file.read_to_string(&mut username)?;
Ok(username)
}
The ? placed after a Result value is defined to work in almost the same way as the match expressions we defined to handle the Result values . If the value of the Result is an Ok , the value inside the Ok will get returned from this expression, and the program will continue. If the value is an Err , the Err will be returned from the whole function as if we had used the return keyword so the error value gets propagated to the calling code.
There is a difference between what the match expression from last example does and what the ? operator does: error values that have the ? operator called on them go through the from function, defined in the From trait in the standard library, which is used to convert values from one type into another. When the ? operator calls the from function, the error type received is converted into the error type defined in the return type of the current function. This is useful when a function returns one error type to represent all the ways a function might fail, even if parts might fail for many different reasons.
Where The ? Operator Can Be Used
The ? operator can only be used in functions whose return type is compatible with the value the ? is used on. This is because the ? operator is defined to perform an early return of a value out of the function, in the same manner as the match expression
This error points out that we're only allowed to use the ? operator in a function that returns Result, Option, or another type that implements FromResidual.
Note that you can use the ? operator on a Result in a function that returns Result , and you can use the ? operator on an Option in a function that returns Option , but you can't mix and match. The ? operator won't automatically convert a Result to an Option or vice versa; in those cases, you can use methods like the ok method on Result or the ok_or method on Option to do the conversion explicitly.
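A small sketch of using ? on an Option: here ? returns None early from the function if there is no first line.
fn last_char_of_first_line(text: &str) -> Option<char> {
    // lines().next() yields Option<&str>; ? propagates None immediately.
    text.lines().next()?.chars().last()
}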
When a main function returns a Result<(), E> , the executable will exit with a value of 0 if main returns Ok(()) and will exit with a nonzero value if main returns an Err value. Executables written in C return integers when they exit: programs that exit successfully return the integer 0 , and programs that error return some integer other than 0 . Rust also returns integers from executables to be compatible with this convention.
The main function may return any types that implement the std::process::Termination trait, which contains a function report that returns an ExitCode. Consult the standard library documentation for more information on implementing the Termination trait for your own types.
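A minimal sketch of a main that returns a Result, so ? can be used directly in it:
use std::error::Error;
use std::fs::File;

fn main() -> Result<(), Box<dyn Error>> {
    // If open fails, ? converts the error into Box<dyn Error> and main
    // returns it, making the process exit with a nonzero status.
    let _greeting_file = File::open("hello.txt")?;
    Ok(())
}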
Generic Types, Traits, and Lifetimes
Every programming language has tools for effectively handling the duplication of concepts. In Rust, one such tool is generics: abstract stand-ins for concrete types or other properties. We can express the behavior of generics or how they relate to other generics without knowing what will be in their place when compiling and running the code.
struct Point<T> {
x: T,
y: T,
}
impl<T> Point<T> {
fn x(&self) -> &T {
&self.x
}
}
Traits: Defining Shared Behavior
A trait defines functionality a particular type has and can share with other types. We can use traits to define shared behavior in an abstract way. We can use trait bounds to specify that a generic type can be any type that has certain behavior.
Note: Traits are similar to a feature often called interfaces in other languages, although with some differences.
pub trait Summary {
fn summarize(&self) -> String;
}
Here, we declare a trait using the trait keyword and then the trait's name, which is Summary in this case. We've also declared the trait as pub so that crates depending on this crate can make use of this trait too, as we'll see in a few examples. Inside the curly brackets, we declare the method signatures that describe the behaviors of the types that implement this trait, which in this case is fn summarize(&self) -> String .
A trait can have multiple methods in its body: the method signatures are listed one per line and each line ends in a semicolon.
Implementing a Trait on a Type
pub struct NewsArticle {
pub headline: String,
pub location: String,
pub author: String,
pub content: String,
}
impl Summary for NewsArticle {
fn summarize(&self) -> String {
format!("{}, by {} ({})", self.headline, self.author, self.location)
}
}
pub struct Tweet {
pub username: String,
pub content: String,
pub reply: bool,
pub retweet: bool,
}
impl Summary for Tweet {
fn summarize(&self) -> String {
format!("{}: {}", self.username, self.content)
}
}
Now that the library has implemented the Summary trait on NewsArticle and Tweet , users of the crate can call the trait methods on instances of NewsArticle and Tweet in the same way we call regular methods. The only difference is that the user must bring the trait into scope as well as the types.
use aggregator::{Summary, Tweet};
fn main() {
let tweet = Tweet {
username: String::from("horse_ebooks"),
content: String::from(
"of course, as you probably already know, people",
),
reply: false,
retweet: false,
};
println!("1 new tweet: {}", tweet.summarize());
}
Other crates that depend on the aggregator crate can also bring the Summary trait into scope to implement Summary on their own types. One restriction to note is that we can implement a trait on a type only if at least one of the trait or the type is local to our crate. For example, we can implement standard library traits like Display on a custom type like Tweet as part of our aggregator crate functionality, because the type Tweet is local to our aggregator crate. We can also implement Summary on Vec<T> in our aggregator crate, because the trait Summary is local to our aggregator crate.
But we can't implement external traits on external types. For example, we can't implement the Display trait on Vec<T> within our aggregator crate, because Display and Vec<T> are both defined in the standard library and aren't local to our aggregator crate. This restriction is part of a property called coherence, and more specifically the orphan rule, so named because the parent type is not present. This rule ensures that other people's code can't break your code and vice versa. Without the rule, two crates could implement the same trait for the same type, and Rust wouldn't know which implementation to use.
Default Implementations
Sometimes it's useful to have default behavior for some or all of the methods in a trait instead of requiring implementations for all methods on every type. Then, as we implement the trait on a particular type, we can keep or override each method's default behavior.
pub trait Summary {
fn summarize(&self) -> String {
String::from("(Read more...)")
}
}
To use a default implementation to summarize instances of NewsArticle , we specify an empty impl block with
impl Summary for NewsArticle {}
Default implementations can call other methods in the same trait, even if those other methods don't have a default implementation. In this way, a trait can provide a lot of useful functionality and only require implementors to specify a small part of it. For example, we could define the Summary trait to have a summarize_author method whose implementation is required, and then define a summarize method that has a default implementation that calls the summarize_author method:
pub trait Summary {
fn summarize_author(&self) -> String;
fn summarize(&self) -> String {
format!("(Read more from {}...)", self.summarize_author())
}
}
Note that it isn't possible to call the default implementation from an overriding implementation of that same method.
Traits as Parameters
pub fn notify(item: &impl Summary) {
println!("Breaking news! {}", item.summarize());
}
Instead of a concrete type for the item parameter, we specify the impl keyword and the trait name. This parameter accepts any type that implements the specified trait. In the body of notify , we can call any methods on item that come from the Summary trait, such as summarize . We can call notify and pass in any instance of NewsArticle or Tweet . Code that calls the function with any other type, such as a String or an i32 , won't compile because those types don't implement Summary .
Trait Bound Syntax
The impl Trait syntax works for straightforward cases but is actually syntax sugar for a longer form known as a trait bound; it looks like this:
pub fn notify<T: Summary>(item: &T) {
println!("Breaking news! {}", item.summarize());
}
This longer form is equivalent to the example in the previous section but is more verbose. We place trait bounds with the declaration of the generic type parameter after a colon and inside angle brackets.
The impl Trait syntax is convenient and makes for more concise code in simple cases, while the fuller trait bound syntax can express more complexity in other cases. For example, we can have two parameters that implement Summary . Doing so with the impl Trait syntax looks like this:
pub fn notify(item1: &impl Summary, item2: &impl Summary) {
Using impl Trait is appropriate if we want this function to allow item1 and item2 to have different types (as long as both types implement Summary ). If we want to force both parameters to have the same type, however, we must use a trait bound, like this:
pub fn notify<T: Summary>(item1: &T, item2: &T) {
The generic type T specified as the type of the item1 and item2 parameters constrains the function such that the concrete type of the value passed as an argument for item1 and item2 must be the same.
Specifying Multiple Trait Bounds with the + Syntax
We can also specify more than one trait bound. Say we wanted notify to use display formatting as well as summarize on item : we specify in the notify definition that item must implement both Display and Summary . We can do so using the + syntax:
pub fn notify(item: &(impl Summary + Display)) {
The + syntax is also valid with trait bounds on generic types:
pub fn notify<T: Summary + Display>(item: &T) {
With the two trait bounds specified, the body of notify can call summarize and use {} to format item .
Clearer Trait Bounds with where Clauses
Using too many trait bounds has its downsides. Each generic has its own trait bounds, so functions with multiple generic type parameters can contain lots of trait bound information between the function's name and its parameter list, making the function signature hard to read. For this reason, Rust has alternate syntax for specifying trait bounds inside a where clause after the function signature. So instead of writing this:
fn some_function<T: Display + Clone, U: Clone + Debug>(t: &T, u: &U) -> i32 {
we can use a where clause, like this:
fn some_function<T, U>(t: &T, u: &U) -> i32
where T: Display + Clone,
U: Clone + Debug
{
Returning Types that Implement Traits
We can also use the impl Trait syntax in the return position to return a value of some type that implements a trait, as shown here:
fn returns_summarizable() -> impl Summary {
Tweet {
username: String::from("horse_ebooks"),
content: String::from(
"of course, as you probably already know, people",
),
reply: false,
retweet: false,
}
}
The ability to specify a return type only by the trait it implements is especially useful in the context of closures and iterators. Closures and iterators create types that only the compiler knows or types that are very long to specify. The impl Trait syntax lets you concisely specify that a function returns some type that implements the Iterator trait without needing to write out a very long type.
However, you can only use impl Trait if you're returning a single type.
Using Trait Bounds to Conditionally Implement Methods
use std::fmt::Display;
struct Pair<T> {
x: T,
y: T,
}
impl<T> Pair<T> {
fn new(x: T, y: T) -> Self {
Self { x, y }
}
}
impl<T: Display + PartialOrd> Pair<T> {
fn cmp_display(&self) {
if self.x >= self.y {
println!("The largest member is x = {}", self.x);
} else {
println!("The largest member is y = {}", self.y);
}
}
}
We can also conditionally implement a trait for any type that implements another trait. Implementations of a trait on any type that satisfies the trait bounds are called blanket implementations and are extensively used in the Rust standard library.
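This is how the standard library implements ToString for every type that implements Display; here is a rough sketch of the pattern using a hypothetical Describe trait (not a standard library trait):
use std::fmt::Display;

trait Describe {
    fn describe(&self) -> String;
}

// Blanket implementation: any type that implements Display gets Describe for free.
impl<T: Display> Describe for T {
    fn describe(&self) -> String {
        format!("value: {}", self)
    }
}

fn main() {
    let n = 3;
    println!("{}", n.describe());
    println!("{}", "hello".describe());
}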
Lifetime annotations don't change how long any of the references live. Rather, they describe the relationships of the lifetimes of multiple references to each other without affecting the lifetimes. Just as functions can accept any type when the signature specifies a generic type parameter, functions can accept references with any lifetime by specifying a generic lifetime parameter.
Lifetime annotations have a slightly unusual syntax: the names of lifetime parameters must start with an apostrophe ( ' ) and are usually all lowercase and very short, like generic types. Most people use the name 'a for the first lifetime annotation. We place lifetime parameter annotations after the & of a reference, using a space to separate the annotation from the reference's type.
&i32 // a reference
&'a i32 // a reference with an explicit lifetime
&'a mut i32 // a mutable reference with an explicit lifetime
Lifetime Annotations in Function Signatures
To use lifetime annotations in function signatures, we need to declare the generic lifetime parameters inside angle brackets between the function name and the parameter list, just as we did with generic type parameters.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
if x.len() > y.len() {
x
} else {
y
}
}
The function signature now tells Rust that for some lifetime 'a , the function takes two parameters, both of which are string slices that live at least as long as lifetime 'a . The function signature also tells Rust that the string slice returned from the function will live at least as long as lifetime 'a . In practice, it means that the lifetime of the reference returned by the longest function is the same as the smaller of the lifetimes of the values referred to by the function arguments.
Remember, when we specify the lifetime parameters in this function signature, we're not changing the lifetimes of any values passed in or returned. Rather, we're specifying that the borrow checker should reject any values that don't adhere to these constraints. Note that the longest function doesn't need to know exactly how long x and y will live, only that some scope can be substituted for 'a that will satisfy this signature.
When annotating lifetimes in functions, the annotations go in the function signature, not in the function body. The lifetime annotations become part of the contract of the function, much like the types in the signature. Having function signatures contain the lifetime contract means the analysis the Rust compiler does can be simpler. If there's a problem with the way a function is annotated or the way it is called, the compiler errors can point to the part of our code and the constraints more precisely. If, instead, the Rust compiler made more inferences about what we intended the relationships of the lifetimes to be, the compiler might only be able to point to a use of our code many steps away from the cause of the problem.
When we pass concrete references to longest , the concrete lifetime that is substituted for 'a is the part of the scope of x that overlaps with the scope of y . In other words, the generic lifetime 'a will get the concrete lifetime that is equal to the smaller of the lifetimes of x and y . Because we've annotated the returned reference with the same lifetime parameter 'a , the returned reference will also be valid for the length of the smaller of the lifetimes of x and y .
Ultimately, lifetime syntax is about connecting the lifetimes of various parameters and return values of functions. Once they're connected, Rust has enough information to allow memory-safe operations and disallow operations that would create dangling pointers or otherwise violate memory safety.
Lifetime Annotations in Struct Definitions
So far, the structs we've defined all hold owned types. We can define structs to hold references, but in that case we would need to add a lifetime annotation on every reference in the struct's definition.
struct ImportantExcerpt<'a> {
part: &'a str,
}
This struct has the single field part that holds a string slice, which is a reference. As with generic data types, we declare the name of the generic lifetime parameter inside angle brackets after the name of the struct so we can use the lifetime parameter in the body of the struct definition. This annotation means an instance of ImportantExcerpt can't outlive the reference it holds in its part field.
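A small usage sketch: the String that the excerpt borrows from outlives the ImportantExcerpt instance, so the borrow checker is satisfied.
fn main() {
    let novel = String::from("Call me Ishmael. Some years ago...");
    let first_sentence = novel.split('.').next().expect("Could not find a '.'");
    let i = ImportantExcerpt {
        part: first_sentence,
    };
    println!("{}", i.part);
}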
After writing a lot of Rust code, the Rust team found that Rust programmers were entering the same lifetime annotations over and over in particular situations. These situations were predictable and followed a few deterministic patterns. The developers programmed these patterns into the compiler's code so the borrow checker could infer the lifetimes in these situations and wouldn't need explicit annotations.
This piece of Rust history is relevant because it's possible that more deterministic patterns will emerge and be added to the compiler. In the future, even fewer lifetime annotations might be required.
The patterns programmed into Rust's analysis of references are called the lifetime elision rules. These aren't rules for programmers to follow; they're a set of particular cases that the compiler will consider, and if your code fits these cases, you don't need to write the lifetimes explicitly.
Lifetimes on function or method parameters are called input lifetimes, and lifetimes on return values are called output lifetimes.
The compiler uses three rules to figure out the lifetimes of the references when there aren't explicit annotations. The first rule applies to input lifetimes, and the second and third rules apply to output lifetimes. If the compiler gets to the end of the three rules and there are still references for which it can't figure out lifetimes, the compiler will stop with an error.
The first rule is that the compiler assigns a lifetime parameter to each parameter that's a reference. In other words, a function with one parameter gets one lifetime parameter: fn foo<'a>(x: &'a i32) ; a function with two parameters gets two separate lifetime parameters: fn foo<'a, 'b>(x: &'a i32, y: &'b i32) ; and so on.
The second rule is that, if there is exactly one input lifetime parameter, that lifetime is assigned to all output lifetime parameters: fn foo<'a>(x: &'a i32) -> &'a i32 .
The third rule is that, if there are multiple input lifetime parameters, but one of them is &self or &mut self because this is a method, the lifetime of self is assigned to all output lifetime parameters. This third rule makes methods much nicer to read and write because fewer symbols are necessary.
Because the third rule really only applies in method signatures, we'll look at lifetimes in that context next to see why the third rule means we don't have to annotate lifetimes in method signatures very often.
Lifetime Annotations in Method Definitions
Lifetime names for struct fields always need to be declared after the impl keyword and then used after the struct's name, because those lifetimes are part of the struct's type.
In method signatures inside the impl block, references might be tied to the lifetime of references in the struct's fields, or they might be independent. In addition, the lifetime elision rules often make it so that lifetime annotations aren't necessary in method signatures.
impl<'a> ImportantExcerpt<'a> {
fn level(&self) -> i32 {
3
}
}
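A method where the third rule applies might look like this: there are two input lifetimes, but because one of the parameters is &self, the return type is given the lifetime of &self without any explicit annotation.
impl<'a> ImportantExcerpt<'a> {
    fn announce_and_return_part(&self, announcement: &str) -> &str {
        println!("Attention please: {}", announcement);
        // Third elision rule: the returned &str gets the lifetime of &self.
        self.part
    }
}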
The Static Lifetime
One special lifetime we need to discuss is 'static , which denotes that the affected reference can live for the entire duration of the program.
let s: &'static str = "I have a static lifetime.";
The text of this string is stored directly in the program's binary, which is always available. Therefore, the lifetime of all string literals is 'static .
You might see suggestions to use the 'static lifetime in error messages. But before specifying 'static as the lifetime for a reference, think about whether the reference you have actually lives the entire lifetime of your program or not, and whether you want it to. Most of the time, an error message suggesting the 'static lifetime results from attempting to create a dangling reference or a mismatch of the available lifetimes. In such cases, the solution is fixing those problems, not specifying the 'static lifetime.
Generic Type Parameters, Trait Bounds, and Lifetimes Together
use std::fmt::Display;
fn longest_with_an_announcement<'a, T>(
x: &'a str,
y: &'a str,
ann: T,
) -> &'a str
where
T: Display,
{
println!("Announcement! {}", ann);
if x.len() > y.len() {
x
} else {
y
}
}
Checking for Panics with should_panic
We do this by adding the attribute should_panic to our test function. The test passes if the code inside the function panics; the test fails if the code inside the function doesn't panic.
pub struct Guess {
value: i32,
}
impl Guess {
pub fn new(value: i32) -> Guess {
if value < 1 || value > 100 {
panic!("Guess value must be between 1 and 100, got {}.", value);
}
Guess { value }
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
#[should_panic]
fn greater_than_100() {
Guess::new(200);
}
}
Tests that use should_panic can be imprecise. A should_panic test would pass even if the test panics for a different reason from the one we were expecting. To make should_panic tests more precise, we can add an optional expected parameter to the should_panic attribute. The test harness will make sure that the failure message contains the provided text. For example, consider the modified code for Guess, where the new function panics with different messages depending on whether the value is too small or too large.
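The modified new function referred to here isn't reproduced in these notes; a sketch consistent with the test below panics with a separate message for each failure case:
impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 {
            panic!(
                "Guess value must be greater than or equal to 1, got {}.",
                value
            );
        } else if value > 100 {
            panic!(
                "Guess value must be less than or equal to 100, got {}.",
                value
            );
        }
        Guess { value }
    }
}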
// --snip--
#[cfg(test)]
mod tests {
use super::*;
#[test]
#[should_panic(expected = "less than or equal to 100")]
fn greater_than_100() {
Guess::new(200);
}
}
Using Result<T, E> in Tests
#[cfg(test)]
mod tests {
#[test]
fn it_works() -> Result<(), String> {
if 2 + 2 == 4 {
Ok(())
} else {
Err(String::from("two plus two does not equal four"))
}
}
}
Writing tests so they return a Result<T, E> enables you to use the question mark operator in the body of tests, which can be a convenient way to write tests that should fail if any operation within them returns an Err variant.
You can't use the #[should_panic] annotation on tests that use Result<T, E> . To assert that an operation returns an Err variant, don't use the question mark operator on the Result<T, E> value. Instead, use assert!(value.is_err()) .
Running Tests in Parallel or Consecutively
When you run multiple tests, by default they run in parallel using threads, meaning they finish running faster and you get feedback quicker. Because the tests are running at the same time, you must make sure your tests don't depend on each other or on any shared state, including a shared environment, such as the current working directory or environment variables.
If you don't want to run the tests in parallel or if you want more fine-grained control over the number of threads used, you can send the --test-threads flag and the number of threads you want to use to the test binary. Take a look at the following example:
$ cargo test -- --test-threads=1
We set the number of test threads to 1 , telling the program not to use any parallelism. Running the tests using one thread will take longer than running them in parallel, but the tests won't interfere with each other if they share state.
Showing Function Output
By default, if a test passes, Rust's test library captures anything printed to standard output. For example, if we call println! in a test and the test passes, we won't see the println! output in the terminal; we'll see only the line that indicates the test passed. If a test fails, we'll see whatever was printed to standard output with the rest of the failure message.
If we want to see printed values for passing tests as well, we can tell Rust to also show the output of successful tests with --show-output .
$ cargo test -- --show-output
Running Single Tests
We can pass the name of any test function to cargo test to run only that test:
$ cargo test one_hundred
Filtering to Run Multiple Tests
We can specify part of a test name, and any test whose name matches that value will be run. For example, because two of our tests' names contain add , we can run those two by running cargo test add :
$ cargo test add
Ignoring Some Tests Unless Specifically Requested
Sometimes a few specific tests can be very time-consuming to execute, so you might want to exclude them during most runs of cargo test . Rather than listing as arguments all tests you do want to run, you can instead annotate the time-consuming tests using the ignore attribute to exclude them, as shown here:
#[test]
fn it_works() {
assert_eq!(2 + 2, 4);
}
#[test]
#[ignore]
fn expensive_test() {
// code that takes an hour to run
}
The expensive_test function is listed as ignored . If we want to run only the ignored tests, we can use cargo test -- --ignored :
$ cargo test -- --ignored
If you want to run all tests whether they're ignored or not, you can run
$ cargo test -- --include-ignored
Test Organization
As mentioned at the start of the chapter, testing is a complex discipline, and different people use different terminology and organization. The Rust community thinks about tests in terms of two main categories: unit tests and integration tests. Unit tests are small and more focused, testing one module in isolation at a time, and can test private interfaces. Integration tests are entirely external to your library and use your code in the same way any other external code would, using only the public interface and potentially exercising multiple modules per test.
Unit Tests
The purpose of unit tests is to test each unit of code in isolation from the rest of the code to quickly pinpoint where code is and isn't working as expected. You'll put unit tests in the src directory in each file with the code that they're testing. The convention is to create a module named tests in each file to contain the test functions and to annotate the module with #[cfg(test)].
The Tests Module and #[cfg(test)]
The #[cfg(test)] annotation on the tests module tells Rust to compile and run the test code only when you run cargo test , not when you run cargo build . This saves compile time when you only want to build the library and saves space in the resulting compiled artifact because the tests are not included. You'll see that because integration tests go in a different directory, they don't need the #[cfg(test)] annotation. However, because unit tests go in the same files as the code, you'll use #[cfg(test)] to specify that they shouldn't be included in the compiled result.
Integration Tests
In Rust, integration tests are entirely external to your library. They use your library in the same way any other code would, which means they can only call functions that are part of your library's public API.
The tests Directory
We create a tests directory at the top level of our project directory, next to src . Cargo knows to look for integration test files in this directory. We can then make as many test files as we want, and Cargo will compile each of the files as an individual crate.
Each file in the tests directory is a separate crate, so we need to bring our library into each test crate's scope. For that reason we add use adder at the top of the code, which we didn't need in the unit tests.
We don't need to annotate any code in tests/integration_test.rs with #[cfg(test)] . Cargo treats the tests directory specially and compiles files in this directory only when we run cargo test .
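A minimal integration test file, assuming a library crate named adder that exposes a hypothetical public add_two function, might look like this:
// tests/integration_test.rs
use adder;

#[test]
fn it_adds_two() {
    assert_eq!(4, adder::add_two(2));
}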
Integration Tests for Binary Crates
If our project is a binary crate that only contains a src/main.rs file and doesn't have a src/lib.rs file, we can't create integration tests in the tests directory and bring functions defined in the src/main.rs file into scope with a use statement. Only library crates expose functions that other crates can use; binary crates are meant to be run on their own.
This is one of the reasons Rust projects that provide a binary have a straightforward src/main.rs file that calls logic that lives in the src/lib.rs file. Using that structure, integration tests can test the library crate with use to make the important functionality available. If the important functionality works, the small amount of code in the src/main.rs file will work as well, and that small amount of code doesn't need to be tested.
Rust's closures are anonymous functions you can save in a variable or pass as arguments to other functions. You can create the closure in one place and then call the closure elsewhere to evaluate it in a different context. Unlike functions, closures can capture values from the scope in which they're defined.
Closure Type Inference and Annotation
There are more differences between functions and closures. Closures don't usually require you to annotate the types of the parameters or the return value like fn functions do. Type annotations are required on functions because the types are part of an explicit interface exposed to your users. Defining this interface rigidly is important for ensuring that everyone agrees on what types of values a function uses and returns. Closures, on the other hand, aren't used in an exposed interface like this: they're stored in variables and used without naming them and exposing them to users of our library.
Closures are typically short and relevant only within a narrow context rather than in any arbitrary scenario. Within these limited contexts, the compiler can infer the types of the parameters and the return type, similar to how it's able to infer the types of most variables (there are rare cases where the compiler needs closure type annotations too).
use std::thread;
use std::time::Duration;
let expensive_closure = |num: u32| -> u32 {
println!("calculating slowly...");
thread::sleep(Duration::from_secs(2));
num
};
let example_closure = |x| x;
let s = example_closure(String::from("hello"));
let n = example_closure(5);
The first time we call example_closure with the String value, the compiler infers the type of x and the return type of the closure to be String . Those types are then locked into the closure in example_closure , and we get a type error when we next try to use a different type with the same closure.
Capturing References or Moving Ownership
Closures can capture values from their environment in three ways, which directly map to the three ways a function can take a parameter: borrowing immutably, borrowing mutably, and taking ownership. The closure will decide which of these to use based on what the body of the closure does with the captured values.
If you want to force the closure to take ownership of the values it uses in the environment even though the body of the closure doesn't strictly need ownership, you can use the move keyword before the parameter list. This technique is mostly useful when passing a closure to a new thread to move the data so that it's owned by the new thread.
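A small sketch of forcing a move when handing data to a new thread:
use std::thread;

fn main() {
    let list = vec![1, 2, 3];
    println!("Before defining closure: {:?}", list);

    // Without move, the closure would only borrow list, which might not live
    // long enough for the spawned thread; move transfers ownership instead.
    thread::spawn(move || println!("From thread: {:?}", list))
        .join()
        .unwrap();
}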
Moving Captured Values Out of Closures and the Fn Traits
Once a closure has captured a reference or captured ownership of a value where the closure is defined (thus affecting what, if anything, is moved into the closure), the code in the body of the closure defines what happens to the references or values when the closure is evaluated later (thus affecting what, if anything, is moved out of the closure). A closure body can do any of the following: move a captured value out of the closure, mutate the captured value, neither move nor mutate the value, or capture nothing from the environment to begin with.
The way a closure captures and handles values from the environment affects which traits the closure implements, and traits are how functions and structs can specify what kinds of closures they can use. Closures will automatically implement one, two, or all three of these Fn traits, in an additive fashion:
FnOnce applies to closures that can be called at least once. All closures implement at least this trait, because all closures can be called. A closure that moves captured values out of its body will only implement FnOnce and none of the other Fn traits, because it can only be called once.
FnMut applies to closures that don't move captured values out of their body, but that might mutate the captured values. These closures can be called more than once.
Fn applies to closures that don't move captured values out of their body and that don't mutate captured values, as well as closures that capture nothing from their environment. These closures can be called more than once without mutating their environment, which is important in cases such as calling a closure multiple times concurrently.
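As a rough illustration, a closure whose body moves a captured value out of itself can only implement FnOnce:
fn main() {
    let s = String::from("hello");

    // The body returns the captured String by value, moving it out of the
    // closure, so this closure implements FnOnce but not FnMut or Fn.
    let consume = move || s;

    let taken = consume();
    println!("{}", taken);
    // consume(); // a second call would not compile: the value was moved out
}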
The Iterator Trait and the next Method
The iter method produces an iterator over immutable references. If we want to create an iterator that takes ownership of the vector and returns owned values, we can call into_iter instead of iter. Similarly, if we want to iterate over mutable references, we can call iter_mut instead of iter.
Methods that Consume the Iterator
Methods that call next are called consuming adaptors , because calling them uses up the iterator. One example is the sum method, which takes ownership of the iterator and iterates through the items by repeatedly calling next , thus consuming the iterator. As it iterates through, it adds each item to a running total and returns the total when iteration is complete.
fn iterator_sum() {
let v1 = vec![1, 2, 3];
let v1_iter = v1.iter();
let total: i32 = v1_iter.sum();
assert_eq!(total, 6);
}
We aren't allowed to use v1_iter after the call to sum because sum takes ownership of the iterator we call it on.
Methods that Produce Other Iterators
Iterator adaptors are methods defined on the Iterator trait that don't consume the iterator. Instead, they produce different iterators by changing some aspect of the original iterator.
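For example, map takes a closure to call on each item and produces a new iterator; because iterator adaptors are lazy, we call a consuming method such as collect to actually produce the new collection:
let v1: Vec<i32> = vec![1, 2, 3];

// map yields a new iterator; collect consumes it into a Vec.
let v2: Vec<_> = v1.iter().map(|x| x + 1).collect();

assert_eq!(v2, vec![2, 3, 4]);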
Commonly Used Sections
We used the # Examples Markdown heading earlier; some other sections that crate authors commonly use in their documentation are:
• Panics: The scenarios in which the function being documented could panic. Callers of the function who don't want their programs to panic should make sure they don't call the function in these situations.
• Errors: If the function returns a Result , describing the kinds of errors that might occur and what conditions might cause those errors to be returned can be helpful to callers so they can write code to handle the different kinds of errors in different ways.
• Safety: If the function is unsafe to call, there should be a section explaining why the function is unsafe and covering the invariants that the function expects callers to uphold.
Commenting Contained Items
The style of doc comment //! adds documentation to the item that contains the comments rather than to the items following the comments. We typically use these doc comments inside the crate root file (src/lib.rs by convention) or inside a module to document the crate or the module as a whole.
For example, to add documentation that describes the purpose of the my_crate crate that contains the add_one function, we add documentation comments that start with //! to the beginning of the src/lib.rs file:
//! # My Crate
//!
//! `my_crate` is a collection of utilities to make performing certain
//! calculations more convenient.
/// Adds one to the number given.
// --snip--
Notice there isn't any code after the last line that begins with //! . Because we started the comments with //! instead of /// , we're documenting the item that contains this comment rather than an item that follows this comment. In this case, that item is the src/lib.rs file, which is the crate root. These comments describe the entire crate.
When we run cargo doc --open , these comments will display on the front page of the documentation for my_crate above the list of public items in the crate.
Smart pointers are usually implemented using structs. Unlike an ordinary struct, smart pointers implement the Deref and Drop traits. The Deref trait allows an instance of the smart pointer struct to behave like a reference so you can write your code to work with either references or smart pointers. The Drop trait allows you to customize the code that's run when an instance of the smart pointer goes out of scope.
• Box<T> for allocating values on the heap
• Rc<T> , a reference counting type that enables multiple ownership
• Ref<T> and RefMut<T> , accessed through RefCell<T> , a type that enforces the borrowing rules at runtime instead of compile time
Using Box<T> to Point to Data on the Heap
The most straightforward smart pointer is a box, whose type is written Box<T> . Boxes allow you to store data on the heap rather than the stack. What remains on the stack is the pointer to the heap data.
Boxes don't have performance overhead, other than storing their data on the heap instead of on the stack. But they don't have many extra capabilities either. You'll use them most often in these situations:
• When you have a type whose size can't be known at compile time and you want to use a value of that type in a context that requires an exact size
• When you have a large amount of data and you want to transfer ownership but ensure the data won't be copied when you do so
• When you want to own a value and you care only that it's a type that implements a particular trait rather than being of a specific type
The Box<T> type is a smart pointer because it implements the Deref trait, which allows Box<T> values to be treated like references. When a Box<T> value goes out of scope, the heap data that the box is pointing to is cleaned up as well because of the Drop trait implementation.
Treating Smart Pointers Like Regular References with the Deref Trait
Implementing the Deref trait allows you to customize the behavior of the dereference operator * (not to be confused with the multiplication or glob operator). By implementing Deref in such a way that a smart pointer can be treated like a regular reference, you can write code that operates on references and use that code with smart pointers too.
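A sketch of a hypothetical single-value wrapper (here called MyBox, not a standard library type) that implements Deref so the * operator can reach the inner value:
use std::ops::Deref;

struct MyBox<T>(T);

impl<T> MyBox<T> {
    fn new(x: T) -> MyBox<T> {
        MyBox(x)
    }
}

impl<T> Deref for MyBox<T> {
    type Target = T;

    fn deref(&self) -> &Self::Target {
        // Return a reference to the inner value so *y becomes *(y.deref()).
        &self.0
    }
}

fn main() {
    let y = MyBox::new(5);
    assert_eq!(5, *y);
}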
Implicit Deref Coercions with Functions and Methods
Deref coercion converts a reference to a type that implements the Deref trait into a reference to another type.
Deref coercion is a convenience Rust performs on arguments to functions and methods, and works only on types that implement the Deref trait. It happens automatically when we pass a reference to a particular type's value as an argument to a function or method that doesn't match the parameter type in the function or method definition. A sequence of calls to the deref method converts the type we provided into the type the parameter needs.
Deref coercion was added to Rust so that programmers writing function and method calls don't need to add as many explicit references and dereferences with & and * . The deref coercion feature also lets us write more code that can work for either references or smart pointers.
When the Deref trait is defined for the types involved, Rust will analyze the types and use Deref::deref as many times as necessary to get a reference to match the parameter's type. The number of times that Deref::deref needs to be inserted is resolved at compile time, so there is no runtime penalty for taking advantage of deref coercion!
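Using the MyBox sketch above, deref coercion lets us pass &MyBox<String> where a &str is expected:
fn hello(name: &str) {
    println!("Hello, {}!", name);
}

fn main() {
    let m = MyBox::new(String::from("Rust"));
    // &MyBox<String> -> &String (MyBox's Deref) -> &str (String's Deref)
    hello(&m);
}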
How Deref Coercion Interacts with Mutability
Similar to how you use the Deref trait to override the * operator on immutable references, you can use the DerefMut trait to override the * operator on mutable references.
Rust does deref coercion when it finds types and trait implementations in three cases:
• From &T to &U when T: Deref<Target=U>
• From &mut T to &mut U when T: DerefMut<Target=U>
• From &mut T to &U when T: Deref<Target=U>
The first two cases are the same as each other except that the second implements mutability. The first case states that if you have a &T , and T implements Deref to some type U , you can get a &U transparently. The second case states that the same deref coercion happens for mutable references.
The third case is trickier: Rust will also coerce a mutable reference to an immutable one. But the reverse is not possible: immutable references will never coerce to mutable references. Because of the borrowing rules, if you have a mutable reference, that mutable reference must be the only reference to that data (otherwise, the program wouldn't compile). Converting one mutable reference to one immutable reference will never break the borrowing rules. Converting an immutable reference to a mutable reference would require that the initial immutable reference is the only immutable reference to that data, but the borrowing rules don't guarantee that. Therefore, Rust can't make the assumption that converting an immutable reference to a mutable reference is possible.
Running Code on Cleanup with the Drop Trait
The other trait important to smart pointers is Drop, which lets you customize what happens when a value is about to go out of scope. You can provide an implementation for the Drop trait on any type, and that code can be used to release resources like files or network connections.
You specify the code to run when a value goes out of scope by implementing the Drop trait. The Drop trait requires you to implement one method named drop that takes a mutable reference to self .
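The CustomSmartPointer type used in the example further below isn't defined in these notes; a minimal definition consistent with it might be:
struct CustomSmartPointer {
    data: String,
}

impl Drop for CustomSmartPointer {
    fn drop(&mut self) {
        // Runs automatically when a CustomSmartPointer goes out of scope.
        println!("Dropping CustomSmartPointer with data `{}`!", self.data);
    }
}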
Rust automatically called drop for us when our instances went out of scope, calling the code we specified. Variables are dropped in the reverse order of their creation, so d was dropped before c .
Dropping a Value Early with std::mem::drop
Unfortunately, it's not straightforward to disable the automatic drop functionality. Disabling drop isn't usually necessary; the whole point of the Drop trait is that it's taken care of automatically. Occasionally, however, you might want to clean up a value early. One example is when using smart pointers that manage locks: you might want to force the drop method that releases the lock so that other code in the same scope can acquire the lock. Rust doesn't let you call the Drop trait's drop method manually; instead you have to call the std::mem::drop function provided by the standard library if you want to force a value to be dropped before the end of its scope.
A destructor is analogous to a constructor, which creates an instance. The drop function in Rust is one particular destructor.
Rust doesn't let us call drop explicitly because Rust would still automatically call drop on the value at the end of main . This would cause a double free error because Rust would be trying to clean up the same value twice.
The std::mem::drop function is different from the drop method in the Drop trait. We call it by passing as an argument the value we want to force drop. The function is in the prelude, so we can modify main
fn main() {
let c = CustomSmartPointer {
data: String::from("some data"),
};
println!("CustomSmartPointer created.");
drop(c);
println!("CustomSmartPointer dropped before the end of main.");
}
Rc<T> , the Reference Counted Smart Pointer
You have to enable multiple ownership explicitly by using the Rust type Rc<T> , which is an abbreviation for reference counting. The Rc<T> type keeps track of the number of references to a value to determine whether or not the value is still in use. If there are zero references to a value, the value can be cleaned up without any references becoming invalid.
We use the Rc<T> type when we want to allocate some data on the heap for multiple parts of our program to read and we can't determine at compile time which part will finish using the data last. If we knew which part would finish last, we could just make that part the data's owner, and the normal ownership rules enforced at compile time would take effect.
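A small sketch of sharing one heap value through Rc and watching the reference count change:
use std::rc::Rc;

fn main() {
    let a = Rc::new(String::from("shared"));
    println!("count after creating a = {}", Rc::strong_count(&a)); // 1

    let b = Rc::clone(&a); // bumps the count, doesn't deep-copy the data
    println!("count after creating b = {}", Rc::strong_count(&a)); // 2

    {
        let c = Rc::clone(&a);
        println!("c = {}, count = {}", c, Rc::strong_count(&a)); // 3
    }

    // c went out of scope, so the count dropped back automatically.
    println!("count after c goes out of scope = {}", Rc::strong_count(&a)); // 2
    println!("a = {}, b = {}", a, b);
}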
RefCell<T> and the Interior Mutability Pattern
Interior mutability is a design pattern in Rust that allows you to mutate data even when there are immutable references to that data; normally, this action is disallowed by the borrowing rules. To mutate data, the pattern uses unsafe code inside a data structure to bend Rust's usual rules that govern mutation and borrowing. Unsafe code indicates to the compiler that we're checking the rules manually instead of relying on the compiler to check them for us;
We can use types that use the interior mutability pattern only when we can ensure that the borrowing rules will be followed at runtime, even though the compiler can't guarantee that. The unsafe code involved is then wrapped in a safe API, and the outer type is still immutable.
Enforcing Borrowing Rules at Runtime with RefCell<T>
Unlike Rc<T> , the RefCell<T> type represents single ownership over the data it holds.
With references and Box<T> , the borrowing rules' invariants are enforced at compile time. With RefCell<T> , these invariants are enforced at runtime. With references, if you break these rules, you'll get a compiler error. With RefCell<T> , if you break these rules, your program will panic and exit.
Here is a recap of the reasons to choose Box<T> , Rc<T> , or RefCell<T> :
• Rc<T> enables multiple owners of the same data; Box<T> and RefCell<T> have single owners.
• Box<T> allows immutable or mutable borrows checked at compile time; Rc<T> allows only immutable borrows checked at compile time; RefCell<T> allows immutable or mutable borrows checked at runtime.
• Because RefCell<T> allows mutable borrows checked at runtime, you can mutate the value inside the RefCell<T> even when the RefCell<T> is immutable.
Mutating the value inside an immutable value is the interior mutability pattern.
Keeping Track of Borrows at Runtime with RefCell<T>
When creating immutable and mutable references, we use the & and &mut syntax, respectively. With RefCell<T> , we use the borrow and borrow_mut methods, which are part of the safe API that belongs to RefCell<T> . The borrow method returns the smart pointer type Ref<T> , and borrow_mut returns the smart pointer type RefMut<T> . Both types implement Deref , so we can treat them like regular references.
The RefCell<T> keeps track of how many Ref<T> and RefMut<T> smart pointers are currently active. Every time we call borrow , the RefCell<T> increases its count of how many immutable borrows are active. When a Ref<T> value goes out of scope, the count of immutable borrows goes down by one. Just like the compile-time borrowing rules, RefCell<T> lets us have many immutable borrows or one mutable borrow at any point in time.
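A small sketch of the runtime borrow tracking:
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(vec![1, 2, 3]);

    // Mutating through an immutable binding: borrow_mut returns a RefMut<T>.
    cell.borrow_mut().push(4);

    // Multiple immutable borrows at the same time are fine.
    let first = cell.borrow();
    let second = cell.borrow();
    println!("{:?} {:?}", *first, *second);

    // Calling cell.borrow_mut() while `first` and `second` are still alive
    // would compile but panic at runtime ("already borrowed").
}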
Reference Cycles Can Leak Memory
Rust's memory safety guarantees make it difficult, but not impossible, to accidentally create memory that is never cleaned up (known as a memory leak). Preventing memory leaks entirely is not one of Rust's guarantees, meaning memory leaks are memory safe in Rust. We can see that Rust allows memory leaks by using Rc<T> and RefCell<T> : it's possible to create references where items refer to each other in a cycle. This creates memory leaks because the reference count of each item in the cycle will never reach 0 , and the values will never be dropped.
Characteristics of Object-Oriented Languages
There is no consensus in the programming community about what features a language must have to be considered object-oriented. Rust is influenced by many programming paradigms, including OOP. Arguably, OOP languages share certain common characteristics, namely objects, encapsulation, and inheritance. Let's look at what each of those characteristics means and whether Rust supports it.
Encapsulation that Hides Implementation Details
Another aspect commonly associated with OOP is the idea of encapsulation, which means that the implementation details of an object aren't accessible to code using that object. Therefore, the only way to interact with an object is through its public API; code using the object shouldn't be able to reach into the object's internals and change data or behavior directly. This enables the programmer to change and refactor an object's internals without needing to change the code that uses the object.
We can use the pub keyword to decide which modules, types, functions, and methods in our code should be public; by default, everything else is private. For example, we can define a struct AveragedCollection that has a field
containing a vector of i32 values. The struct can also have a field that contains the average of the values in the vector, meaning the average doesn't have to be computed on demand whenever anyone needs it. In other words, AveragedCollection will cache the calculated average for us.
pub struct AveragedCollection {
list: Vec<i32>,
average: f64,
}
The struct is marked pub so that other code can use it, but the fields within the struct remain private. This is important in this case because we want to ensure that whenever a value is added or removed from the list, the average is also updated. We do this by implementing add , remove , and average methods on the struct, sketched below.
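The methods might look roughly like this; update_average stays private so callers can't bypass the caching logic:
impl AveragedCollection {
    pub fn add(&mut self, value: i32) {
        self.list.push(value);
        self.update_average();
    }

    pub fn remove(&mut self) -> Option<i32> {
        let result = self.list.pop();
        match result {
            Some(value) => {
                self.update_average();
                Some(value)
            }
            None => None,
        }
    }

    pub fn average(&self) -> f64 {
        self.average
    }

    fn update_average(&mut self) {
        let total: i32 = self.list.iter().sum();
        self.average = total as f64 / self.list.len() as f64;
    }
}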
If encapsulation is a required aspect for a language to be considered object-oriented, then Rust meets that requirement. The option to use pub or not for different parts of code enables encapsulation of implementation details.
Inheritance as a Type System and as Code Sharing
Inheritance is a mechanism whereby an object can inherit elements from another object's definition, thus gaining the parent object's data and behavior without you having to define them again.
If a language must have inheritance to be an object-oriented language, then Rust is not one. There is no way to define a struct that inherits the parent struct's fields and method implementations without using a macro.
However, if you're used to having inheritance in your programming toolbox, you can use other solutions in Rust, depending on your reason for reaching for inheritance in the first place.
You would choose inheritance for two main reasons. One is for reuse of code: you can implement particular behavior for one type, and inheritance enables you to reuse that implementation for a different type. You can do this in a limited way in Rust code using default trait method implementations,
The other reason to use inheritance relates to the type system: to enable a child type to be used in the same places as the parent type. This is also called polymorphism , which means that you can substitute multiple objects for each other at runtime if they share certain characteristics.
Polymorphism
To many people, polymorphism is synonymous with inheritance. But it's actually a more general concept that refers to code that can work with data of multiple types. For inheritance, those types are generally subclasses. Rust instead uses generics to abstract over different possible types and trait bounds to impose constraints on what those types must provide. This is sometimes called bounded parametric polymorphism.
Conditional if let Expressions
if let expressions mainly as a shorter way to write the equivalent of a match that only matches one case. Optionally, if let can have a corresponding else containing code to run if the pattern in the if let doesn't match.
It's also possible to mix and match if let , else if , and else if let expressions. Doing so gives us more flexibility than a match expression, in which we can express only one value to compare with the patterns. Also, Rust doesn't require that the conditions in a series of if let , else if , and else if let arms relate to each other.
fn main() {
let favorite_color: Option<&str> = None;
let is_tuesday = false;
let age: Result<u8, _> = "34".parse();
if let Some(color) = favorite_color {
println!("Using your favorite color, {color}, as the background");
} else if is_tuesday {
println!("Tuesday is green day!");
} else if let Ok(age) = age {
if age > 30 {
println!("Using purple as the background color");
} else {
println!("Using orange as the background color");
}
} else {
println!("Using blue as the background color");
}
}
while let Conditional Loops
Similar in construction to if let , the while let conditional loop allows a while loop to run for as long as a pattern continues to match. The following while let loop uses a vector as a stack and prints the values in the vector in the opposite order in which they were pushed.
let mut stack = Vec::new();
stack.push(1);
stack.push(2);
stack.push(3);
while let Some(top) = stack.pop() {
println!("{}", top);
}
let Statements
Prior to this chapter, we had only explicitly discussed using patterns with match and if let , but in fact, we've used patterns in other places as well, including in let statements. For example, consider this straightforward variable assignment with let :
let x = 5;
Every time you've used a let statement like this you've been using patterns, although you might not have realized it! More formally, a let statement looks like this:
let PATTERN = EXPRESSION;
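To make the pattern part more visible, here's a let statement whose pattern is a tuple rather than a single variable name; each element of the tuple on the right is bound to the corresponding name on the left:
fn main() {
    // The pattern (x, y, z) matches the tuple and binds each element.
    let (x, y, z) = (1, 2, 3);
    println!("x = {x}, y = {y}, z = {z}");
}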
Refutability: Whether a Pattern Might Fail to Match
Patterns come in two forms: refutable and irrefutable. Patterns that will match for any possible value passed are irrefutable. An example would be x in the statement let x = 5; because x matches anything and therefore cannot fail to match. Patterns that can fail to match for some possible value are refutable. An example would be Some(x) in the expression if let Some(x) = a_value because if the value in the a_value variable is None rather than Some , the Some(x) pattern will not match.
Function parameters, let statements, and for loops can only accept irrefutable patterns, because the program cannot do anything meaningful when values don't match. The if let and while let expressions accept refutable and irrefutable patterns, but the compiler warns against irrefutable patterns because by definition they're intended to handle possible failure: the functionality of a conditional is in its ability to perform differently depending on success or failure.
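A short sketch of what happens when a refutable pattern is used where an irrefutable one is required: the commented-out let does not compile, while if let handles the non-matching case.
fn main() {
    let some_option_value: Option<i32> = None;

    // Does not compile: `Some(x)` is refutable, but `let` needs an
    // irrefutable pattern because there is no code to run for `None`.
    // let Some(x) = some_option_value;

    // `if let` accepts the refutable pattern and simply skips the block
    // when the pattern doesn't match.
    if let Some(x) = some_option_value {
        println!("{}", x);
    }
}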
Multiple Patterns
In match expressions, you can match multiple patterns using the | syntax, which is the pattern or operator. For example, in the following code we match the value of x against the match arms, the first of which has an or option, meaning if the value of x matches either of the values in that arm, that arm's code will run:
let x = 1;
match x {
1 | 2 => println!("one or two"),
3 => println!("three"),
_ => println!("anything"),
}
This code prints one or two .
Matching Ranges of Values with ..=
The ..= syntax allows us to match to an inclusive range of values. In the following code, when a pattern matches any of the values within the given range, that arm will execute:
let x = 5;
match x {
1..=5 => println!("one through five"),
_ => println!("something else"),
}
If x is 1, 2, 3, 4, or 5, the first arm will match. This syntax is more convenient for multiple match values than using the | operator to express the same idea; if we were to use | we would have to specify 1 | 2 | 3 | 4 | 5. Specifying a range is much shorter, especially if we want to match a large range such as any number between 1 and 1,000.
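Ranges in patterns are only allowed with numeric values and char values. A quick sketch with char ranges:
fn main() {
    let x = 'c';

    match x {
        'a'..='j' => println!("early ASCII letter"),
        'k'..='z' => println!("late ASCII letter"),
        _ => println!("something else"),
    }
}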
Destructuring Structs
struct Point {
x: i32,
y: i32,
}
fn main() {
let p = Point { x: 0, y: 7 };
let Point { x: a, y: b } = p;
assert_eq!(0, a);
assert_eq!(7, b);
}
--- or ---
struct Point {
x: i32,
y: i32,
}
fn main() {
let p = Point { x: 0, y: 7 };
let Point { x, y } = p;
assert_eq!(0, x);
assert_eq!(7, y);
}
We can also destructure with literal values as part of the struct pattern rather than creating variables for all the fields. Doing so allows us to test some of the fields for particular values while creating variables to destructure the other fields. Below, we have a match expression that separates Point values into three cases: points that lie directly on the x axis (which is true when y = 0 ), on the y axis ( x = 0 ), or on neither axis.
fn main() {
let p = Point { x: 0, y: 7 };
match p {
Point { x, y: 0 } => println!("On the x axis at {}", x),
Point { x: 0, y } => println!("On the y axis at {}", y),
Point { x, y } => println!("On neither axis: ({}, {})", x, y),
}
}
The first arm will match any point that lies on the x axis by specifying that the y field matches if its value matches the literal 0 . The pattern still creates an x variable that we can use in the code for this arm.
Similarly, the second arm matches any point on the y axis by specifying that the x field matches if its value is 0 and creates a variable y for the value of the y field. The third arm doesn't specify any literals, so it matches any other Point and creates variables for both the x and y fields.
In this example, the value p matches the second arm by virtue of x containing a 0, so this code will print On the y axis at 7 . Remember that a match expression stops checking arms once it has found the first matching pattern, so even though Point { x: 0, y: 0 } lies on both the x axis and the y axis, a match on that value would only print On the x axis at 0 .
Ignoring Remaining Parts of a Value with ..
With values that have many parts, we can use the .. syntax to use specific parts and ignore the rest, avoiding the need to list underscores for each ignored value. The .. pattern ignores any parts of a value that we haven't explicitly matched in the rest of the pattern. Below, we have a Point struct that holds a coordinate in three-dimensional space. In the match expression, we want to operate only on the x coordinate and ignore the values in the y and z fields.
struct Point {
x: i32,
y: i32,
z: i32,
}
let origin = Point { x: 0, y: 0, z: 0 };
match origin {
Point { x, .. } => println!("x is {}", x),
}
We list the x value and then just include the .. pattern. This is quicker than having to list y: _ and z: _ , particularly when we're working with structs that have lots of fields in situations where only one or two fields are relevant.
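The same syntax works for tuples; a sketch that keeps only the first and last values and ignores everything in between:
fn main() {
    let numbers = (2, 4, 8, 16, 32);

    match numbers {
        // `..` must be unambiguous: it can appear only once per pattern.
        (first, .., last) => {
            println!("Some numbers: {first}, {last}");
        }
    }
}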
Extra Conditionals with Match Guards
A match guard is an additional if condition, specified after the pattern in a match arm, that must also match for that arm to be chosen. Match guards are useful for expressing more complex ideas than a pattern alone allows.
The condition can use variables created in the pattern. The following shows a match where the first arm has the pattern Some(x) and also has a match guard of if x % 2 == 0 (which will be true if the number is even).
let num = Some(4);
match num {
Some(x) if x % 2 == 0 => println!("The number {} is even", x),
Some(x) => println!("The number {} is odd", x),
None => (),
}
This example will print The number 4 is even . When num is compared to the pattern in the first arm, it matches, because Some(4) matches Some(x) . Then the match guard checks whether the remainder of dividing x by 2 is equal to 0, and because it is, the first arm is selected.
If num had been Some(5) instead, the match guard in the first arm would have been false because the remainder of 5 divided by 2 is 1, which is not equal to 0. Rust would then go to the second arm, which would match because the second arm doesn't have a match guard and therefore matches any Some variant. There is no way to express the if x % 2 == 0 condition within a pattern, so the match guard gives us the ability to express this logic. The downside of this additional expressiveness is that the compiler doesn't try to check for exhaustiveness when match guard expressions are involved.
fn main() {
let x = Some(5);
let y = 10;
match x {
Some(50) => println!("Got 50"),
Some(n) if n == y => println!("Matched, n = {n}"),
_ => println!("Default case, x = {:?}", x),
}
println!("at the end: x = {:?}, y = {y}", x);
}
This code will now print Default case, x = Some(5) . The pattern in the second match arm doesn't introduce a new variable y that would shadow the outer y , meaning we can use the outer y in the match guard. Instead of specifying the pattern as Some(y) , which would have shadowed the outer y , we specify Some(n) . This creates a new variable n that doesn't shadow anything because there is no n variable outside the match .
@ Bindings
The at operator @ lets us create a variable that holds a value at the same time as we're testing that value for a pattern match. We want to test that a Message::Hello id field is within the range 3..=7 . We also want to bind the value to the variable id_variable so we can use it in the code associated with the arm. We could name this variable id , the same as the field, but for this example we'll use a different name.
enum Message {
Hello { id: i32 },
}
let msg = Message::Hello { id: 5 };
match msg {
Message::Hello {
id: id_variable @ 3..=7,
} => println!("Found an id in range: {}", id_variable),
Message::Hello { id: 10..=12 } => {
println!("Found an id in another range")
}
Message::Hello { id } => println!("Found some other id: {}", id),
}
This example will print Found an id in range: 5 . By specifying id_variable @ before the range 3..=7 , we're capturing whatever value matched the range while also testing that the value matched the range pattern.
In the second arm, where we only have a range specified in the pattern, the code associated with the arm doesn't have a variable that contains the actual value of the id field. The id field's value could have been 10, 11, or 12, but the code that goes with that pattern doesn't know which it is. The pattern code isn't able to use the value from the id field, because we haven't saved the id value in a variable.
Using @ lets us test a value and save it in a variable within one pattern.
To switch to unsafe Rust, use the unsafe keyword and then start a new block that holds the unsafe code. You can take five actions in unsafe Rust that you can't in safe Rust, which we call unsafe superpowers. Those superpowers include the ability to:
• Dereference a raw pointer • Call an unsafe function or method • Access or modify a mutable static variable • Implement an unsafe trait • Access fields of a union
It's important to understand that unsafe doesn't turn off the borrow checker or disable any other of Rust's safety checks: if you use a reference in unsafe code, it will still be checked. The unsafe keyword only gives you access to these five features that are then not checked by the compiler for memory safety. You'll still get some degree of safety inside of an unsafe block.
In addition, unsafe does not mean the code inside the block is necessarily dangerous or that it will definitely have memory safety problems: the intent is that as the programmer, you'll ensure the code inside an unsafe block will access memory in a valid way.
To isolate unsafe code as much as possible, it's best to enclose unsafe code within a safe abstraction and provide a safe API, which we'll discuss later in the chapter when we examine unsafe functions and methods. Parts of the standard library are implemented as safe abstractions over unsafe code that has been audited. Wrapping unsafe code in a safe abstraction prevents uses of unsafe from leaking out into all the places that you or your users might want to use the functionality implemented with unsafe code, because using a safe abstraction is safe.
Dereferencing a Raw Pointer
Unsafe Rust has two new types called raw pointers that are similar to references. As with references, raw pointers can be immutable or mutable and are written as *const T and *mut T , respectively. The asterisk isn't the dereference operator; it's part of the type name. In the context of raw pointers, immutable means that the pointer can't be directly assigned to after being dereferenced.
Different from references and smart pointers, raw pointers: • Are allowed to ignore the borrowing rules by having both immutable and mutable pointers or multiple mutable pointers to the same location • Aren't guaranteed to point to valid memory • Are allowed to be null • Don't implement any automatic cleanup
By opting out of having Rust enforce these guarantees, you can give up guaranteed safety in exchange for greater performance or the ability to interface with another language or hardware where Rust's guarantees don't apply.
let mut num = 5;
let r1 = &num as *const i32; // an immutable raw pointer to num
let r2 = &mut num as *mut i32;
Notice that we don't include the unsafe keyword in this code. We can create raw pointers in safe code; we just can't dereference raw pointers outside an unsafe block, as you'll see in a bit.
Recall that we can create raw pointers in safe code, but we can't dereference raw pointers and read the data being pointed to. Next, we use the dereference operator * on the raw pointers, which requires an unsafe block.
unsafe {
println!("r1 is: {}", *r1);
println!("r2 is: {}", *r2);
}
With raw pointers, we can create a mutable pointer and an immutable pointer to the same location and change data through the mutable pointer, potentially creating a data race. Be careful!
Calling an Unsafe Function or Method
The second type of operation you can perform in an unsafe block is calling unsafe functions. Unsafe functions and methods look exactly like regular functions and methods, but they have an extra unsafe before the rest of the definition.
unsafe fn dangerous() {}
unsafe {
dangerous();
}
Creating a Safe Abstraction over Unsafe Code
Just because a function contains unsafe code doesn't mean we need to mark the entire function as unsafe. In fact, wrapping unsafe code in a safe function is a common abstraction.
fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
let len = values.len();
assert!(mid <= len);
(&mut values[..mid], &mut values[mid..])
}
We can't implement this function using only safe Rust. For simplicity, we'll implement split_at_mut as a function rather than a method and only for slices of i32 values rather than for a generic type T.
Rust's borrow checker can't understand that we're borrowing different parts of the slice; it only knows that we're borrowing from the same slice twice. Borrowing different parts of a slice is fundamentally okay because the two slices aren't overlapping, but Rust isn't smart enough to know this. When we know code is okay, but Rust doesn't, it's time to reach for unsafe code.
use std::slice;
fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
let len = values.len();
let ptr = values.as_mut_ptr();
assert!(mid <= len);
unsafe {
(
slice::from_raw_parts_mut(ptr, mid),
slice::from_raw_parts_mut(ptr.add(mid), len - mid),
)
}
}
Note that we don't need to mark the resulting split_at_mut function as unsafe , and we can call this function from safe Rust. We've created a safe abstraction to the unsafe code with an implementation of the function that uses unsafe code in a safe way, because it creates only valid pointers from the data this function has access to.
Using extern Functions to Call External Code
Sometimes, your Rust code might need to interact with code written in another language. For this, Rust has the keyword extern that facilitates the creation and use of a Foreign Function Interface (FFI) . An FFI is a way for a programming language to define functions and enable a different (foreign) programming language to call those functions.
Functions declared within extern blocks are always unsafe to call from Rust code. The reason is that other languages don't enforce Rust's rules and guarantees, and Rust can't check them, so responsibility falls on the programmer to ensure safety.
extern "C" {
fn abs(input: i32) -> i32;
}
fn main() {
unsafe {
println!("Absolute value of -3 according to C: {}", abs(-3));
}
}
Within the extern "C" block, we list the names and signatures of external functions from another language we want to call. The "C" part defines which application binary interface (ABI) the external function uses: the ABI defines how to call the function at the assembly level. The "C" ABI is the most common and follows the C programming language's ABI.
Calling Rust Functions from Other Languages
We can also use extern to create an interface that allows other languages to call Rust functions. Instead of creating a whole extern block, we add the extern keyword and specify the ABI to use just before the fn keyword for the relevant function. We also need to add a #[no_mangle] annotation to tell the Rust compiler not to mangle the name of this function. Mangling is when a compiler changes the name we've given a function to a different name that contains more information for other parts of the compilation process to consume but is less human readable. Every programming language compiler mangles names slightly differently, so for a Rust function to be nameable by other languages, we must disable the Rust compiler's name mangling.
In the following example, we make the call_from_c function accessible from C code, after it's compiled to a shared library and linked from C:
#[no_mangle]
pub extern "C" fn call_from_c() {
println!("Just called a Rust function from C!");
}
This usage of extern does not require unsafe .
Accessing or Modifying a Mutable Static Variable
In this book, we've not yet talked about global variables, which Rust does support but can be problematic with Rust's ownership rules. If two threads are accessing the same mutable global variable, it can cause a data race. In Rust, global variables are called static variables.
static HELLO_WORLD: &str = "Hello, world!";
fn main() {
println!("name is: {}", HELLO_WORLD);
}
Static variables are similar to constants. The names of static variables are in SCREAMING_SNAKE_CASE by convention. Static variables can only store references with the 'static lifetime, which means the Rust compiler can figure out the lifetime and we aren't required to annotate it explicitly. Accessing an immutable static variable is safe.
A subtle difference between constants and immutable static variables is that values in a static variable have a fixed address in memory. Using the value will always access the same data. Constants, on the other hand, are allowed to duplicate their data whenever they're used. Another difference is that static variables can be mutable. Accessing and modifying mutable static variables is unsafe.
static mut COUNTER: u32 = 0;
fn add_to_count(inc: u32) {
unsafe {
COUNTER += inc;
}
}
fn main() {
add_to_count(3);
unsafe {
println!("COUNTER: {}", COUNTER);
}
}
With mutable data that is globally accessible, it's difficult to ensure there are no data races, which is why Rust considers mutable static variables to be unsafe. Where possible, it's preferable to use the concurrency techniques and thread-safe smart pointers discussed elsewhere so the compiler checks that data accessed from different threads is done safely.
Implementing an Unsafe Trait
We can use unsafe to implement an unsafe trait . A trait is unsafe when at least one of its methods has some invariant that the compiler can't verify. We declare that a trait is unsafe by adding the unsafe keyword before trait and marking the implementation of the trait as unsafe too.
unsafe trait Foo {
// methods go here
}
unsafe impl Foo for i32 {
// method implementations go here
}
Accessing Fields of a Union
The final action that works only with unsafe is accessing fields of a union . A union is similar to a struct , but only one declared field is used in a particular instance at one time. Unions are primarily used to interface with unions in C code. Accessing union fields is unsafe because Rust can't guarantee the type of the data currently being stored in the union instance.
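A minimal sketch of defining and reading a union (the type and field names here are made up for illustration):
// Only one of these fields is meaningful at any given time; Rust can't
// track which one, so reading a field requires unsafe.
#[repr(C)]
union IntOrFloat {
    i: u32,
    f: f32,
}

fn main() {
    let u = IntOrFloat { i: 10 };
    unsafe {
        println!("u.i = {}", u.i);
    }
}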
Specifying Placeholder Types in Trait Definitions with Associated Types
Associated types connect a type placeholder with a trait such that the trait method definitions can use these placeholder types in their signatures. The implementor of a trait will specify the concrete type to be used instead of the placeholder type for the particular implementation. That way, we can define a trait that uses some types without needing to know exactly what those types are until the trait is implemented.
pub trait Iterator {
type Item;
fn next(&mut self) -> Option<Self::Item>;
}
Associated types might seem like a similar concept to generics, in that the latter allow us to define a function without specifying what types it can handle. To examine the difference between the two concepts, we'll look at an implementation of the Iterator trait on a type named Counter that specifies the Item type is u32 :
impl Iterator for Counter {
type Item = u32;
fn next(&mut self) -> Option<Self::Item> {
        // --snip--
    }
}
This syntax seems comparable to that of generics. So why not just define the Iterator trait with generics, as in the hypothetical definition below?
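For comparison, the hypothetical generic version of the trait would look like this:
pub trait Iterator<T> {
    fn next(&mut self) -> Option<T>;
}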
The difference is that when using generics, we must annotate the types in each implementation; because we can also implement Iterator<String> for Counter or any other type, we could have multiple implementations of Iterator for Counter . In other words, when a trait has a generic parameter, it can be implemented for a type multiple times, changing the concrete types of the generic type parameters each time. When we use the next method on Counter , we would have to provide type annotations to indicate which implementation of Iterator we want to use.
With associated types, we don't need to annotate types because we can't implement a trait on a type multiple times. With the definition that uses associated types, we can only choose what the type of Item will be once, because there can only be one impl Iterator for Counter . We don't have to specify that we want an iterator of u32 values everywhere we call next on Counter .
Associated types also become part of the trait's contract: implementors of the trait must provide a type to stand in for the associated type placeholder. Associated types often have a name that describes how the type will be used, and documenting the associated type in the API documentation is good practice.
Default Generic Type Parameters and Operator Overloading
When we use generic type parameters, we can specify a default concrete type for the generic type. This eliminates the need for implementors of the trait to specify a concrete type if the default type works. You specify a default type when declaring a generic type with the <PlaceholderType=ConcreteType> syntax.
A great example of a situation where this technique is useful is with operator overloading, in which you customize the behavior of an operator (such as + ) in particular situations.
Rust doesn't allow you to create your own operators or overload arbitrary operators. But you can overload the operations and corresponding traits listed in std::ops by implementing the traits associated with the operator. For example, we overload the + operator to add two Point instances together. We do this by implementing the Add trait on a Point struct:
use std::ops::Add;
#[derive(Debug, Copy, Clone, PartialEq)]
struct Point {
x: i32,
y: i32,
}
impl Add for Point {
type Output = Point;
fn add(self, other: Point) -> Point {
Point {
x: self.x + other.x,
y: self.y + other.y,
}
}
}
fn main() {
assert_eq!(
Point { x: 1, y: 0 } + Point { x: 2, y: 3 },
Point { x: 3, y: 3 }
);
}
trait Add<Rhs=Self> {
type Output;
fn add(self, rhs: Rhs) -> Self::Output;
}
This code should look generally familiar: a trait with one method and an associated type. The new part is Rhs=Self : this syntax is called default type parameters. The Rhs generic type parameter (short for “right hand side”) defines the type of the rhs parameter in the add method. If we don't specify a concrete type for Rhs when we implement the Add trait, the type of Rhs will default to Self , which will be the type we're implementing Add on.
When we implemented Add for Point , we used the default for Rhs because we wanted to add two Point instances.
Let's look at an example of implementing the Add trait where we want to customize the Rhs type rather than using the default.
We have two structs, Millimeters and Meters , holding values in different units. This thin wrapping of an existing type in another struct is known as the newtype pattern.
use std::ops::Add;
struct Millimeters(u32);
struct Meters(u32);
impl Add<Meters> for Millimeters {
type Output = Millimeters;
fn add(self, other: Meters) -> Millimeters {
Millimeters(self.0 + (other.0 * 1000))
}
}
To add Millimeters and Meters , we specify impl Add<Meters> to set the value of the Rhs type parameter instead of using the default of Self .
You'll use default type parameters in two main ways: • To extend a type without breaking existing code • To allow customization in specific cases most users won't need
Fully Qualified Syntax for Disambiguation: Calling Methods with the Same Name
Nothing in Rust prevents a trait from having a method with the same name as another trait's method, nor does Rust prevent you from implementing both traits on one type. It's also possible to implement a method directly on the type with the same name as methods from traits.
trait Pilot {
fn fly(&self);
}
trait Wizard {
fn fly(&self);
}
struct Human;
impl Pilot for Human {
fn fly(&self) {
println!("This is your captain speaking.");
}
}
impl Wizard for Human {
fn fly(&self) {
println!("Up!");
}
}
impl Human {
fn fly(&self) {
println!("*waving arms furiously*");
}
}
When we call fly on an instance of Human , the compiler defaults to calling the method that is directly implemented on the type, as shown here:
fn main() {
let person = Human;
person.fly();
}
To call the fly methods from either the Pilot trait or the Wizard trait, we need to use more explicit syntax to specify which fly method we mean.
fn main() {
let person = Human;
Pilot::fly(&person);
Wizard::fly(&person);
person.fly();
}
Specifying the trait name before the method name clarifies to Rust which implementation of fly we want to call. We could also write Human::fly(&person) , which is equivalent to person.fly() , but this is a bit longer to write if we don't need to disambiguate.
However, associated functions that are not methods don't have a self parameter. When there are multiple types or traits that define non-method functions with the same function name, Rust doesn't always know which type you mean unless you use fully qualified syntax.
trait Animal {
fn baby_name() -> String;
}
struct Dog;
impl Dog {
fn baby_name() -> String {
String::from("Spot")
}
}
impl Animal for Dog {
fn baby_name() -> String {
String::from("puppy")
}
}
fn main() {
println!("A baby dog is called a {}", Dog::baby_name());
}
This output isn't what we wanted. We want to call the baby_name function that is part of the Animal trait that we implemented on Dog so the code prints A baby dog is called a puppy . The technique of specifying the trait name that we used earlier doesn't help here; if we change main to the following code, we'll get a compilation error.
fn main() {
println!("A baby dog is called a {}", Animal::baby_name());
}
To disambiguate and tell Rust that we want to use the implementation of Animal for Dog as opposed to the implementation of Animal for some other type, we need to use fully qualified syntax.
fn main() {
println!("A baby dog is called a {}", <Dog as Animal>::baby_name());
}
We're providing Rust with a type annotation within the angle brackets, which indicates we want to call the baby_name method from the Animal trait as implemented on Dog by saying that we want to treat the Dog type as an Animal for this function call. This code will now print what we want: A baby dog is called a puppy .
In general, fully qualified syntax is defined as follows:
<Type as Trait>::function(receiver_if_method, next_arg, ...);
For associated functions that aren't methods, there would not be a receiver : there would only be the list of other arguments. You could use fully qualified syntax everywhere that you call functions or methods. However, you're allowed to omit any part of this syntax that Rust can figure out from other information in the program. You only need to use this more verbose syntax in cases where there are multiple implementations that use the same name and Rust needs help to identify which implementation you want to call.
Using Supertraits to Require One Trait's Functionality Within Another Trait
Sometimes, you might write a trait definition that depends on another trait: for a type to implement the first trait, you want to require that type to also implement the second trait. You would do this so that your trait definition can make use of the associated items of the second trait. The trait your trait definition is relying on is called a supertrait of your trait.
use std::fmt;
trait OutlinePrint: fmt::Display {
fn outline_print(&self) {
let output = self.to_string();
let len = output.len();
println!("{}", "*".repeat(len + 4));
println!("*{}*", " ".repeat(len + 2));
println!("* {} *", output);
println!("*{}*", " ".repeat(len + 2));
println!("{}", "*".repeat(len + 4));
}
}
Because we've specified that OutlinePrint requires the Display trait, we can use the to_string function that is automatically implemented for any type that implements Display . If we tried to use to_string without adding a colon and specifying the Display trait after the trait name, we'd get an error saying that no method named to_string was found for the type &Self in the current scope.
Let's see what happens when we try to implement OutlinePrint on a type that doesn't implement Display , such as the Point struct:
struct Point {
x: i32,
y: i32,
}
impl OutlinePrint for Point {}
We get an error saying that Display is required but not implemented. To fix this, we implement Display on Point to satisfy the constraint that OutlinePrint requires:
use std::fmt;
impl fmt::Display for Point {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "({}, {})", self.x, self.y)
}
}
Then implementing the OutlinePrint trait on Point will compile successfully, and we can call outline_print on a Point instance to display it within an outline of asterisks.
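A quick usage sketch, assuming the Point, Display, and OutlinePrint definitions above are in scope; the value is printed inside a box of asterisks:
fn main() {
    let p = Point { x: 1, y: 3 };
    p.outline_print();
    // Prints:
    // **********
    // *        *
    // * (1, 3) *
    // *        *
    // **********
}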
Using the Newtype Pattern to Implement External Traits on External Types
we mentioned the orphan rule that states we're only allowed to implement a trait on a type if either the trait or the type is local to our crate. It's possible to get around this restriction using the newtype pattern, which involves creating a new type in a tuple struct.
The tuple struct will have one field and be a thin wrapper around the type we want to implement a trait for. Then the wrapper type is local to our crate, and we can implement the trait on the wrapper. Newtype is a term that originates from the Haskell programming language. There is no runtime performance penalty for using this pattern, and the wrapper type is elided at compile time.
use std::fmt;
struct Wrapper(Vec<String>);
impl fmt::Display for Wrapper {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "[{}]", self.0.join(", "))
}
}
fn main() {
let w = Wrapper(vec![String::from("hello"), String::from("world")]);
println!("w = {}", w);
}
The Rust type system has some features that we've so far mentioned but haven't yet discussed. We'll start by discussing newtypes in general as we examine why newtypes are useful as types. Then we'll move on to type aliases, a feature similar to newtypes but with slightly different semantics. We'll also discuss the ! type and dynamically sized types.
Creating Type Synonyms with Type Aliases
Rust provides the ability to declare a type alias to give an existing type another name. For this we use the type keyword. For example, we can create the alias Kilometers to i32 like so:
type Kilometers = i32;
Now the alias Kilometers is a synonym for i32 ; unlike the Millimeters and Meters newtypes we created earlier, Kilometers is not a separate, new type. Values that have the type Kilometers will be treated the same as values of type i32 :
type Kilometers = i32;
let x: i32 = 5;
let y: Kilometers = 5;
println!("x + y = {}", x + y);
The main use case for type synonyms is to reduce repetition. For example, we might have a lengthy type like the following. Choosing a meaningful name for a type alias can help communicate your intent as well (thunk is a word for code to be evaluated at a later time, so it's an appropriate name for a closure that gets stored).
let f: Box<dyn Fn() + Send + 'static> = Box::new(|| println!("hi"));
fn takes_long_type(f: Box<dyn Fn() + Send + 'static>) {
// --snip--
}
fn returns_long_type() -> Box<dyn Fn() + Send + 'static> {
// --snip--
}
A type alias makes this code more manageable by reducing the repetition. We've introduced an alias named Thunk for the verbose type and can replace all uses of the type with the shorter alias Thunk .
type Thunk = Box<dyn Fn() + Send + 'static>;
let f: Thunk = Box::new(|| println!("hi"));
fn takes_long_type(f: Thunk) {
// --snip--
}
fn returns_long_type() -> Thunk {
// --snip--
}
The Never Type that Never Returns
Rust has a special type named ! that's known in type theory lingo as the empty type because it has no values. We prefer to call it the never type because it stands in the place of the return type when a function will never return. Here's an example:
fn bar() -> ! {
// --snip--
}
This code is read as “the function bar returns never.” Functions that return never are called diverging functions . We can't create values of the type ! so bar can never possibly return.
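A brief sketch of why this is useful: an expression of type ! can coerce into any other type, which is what lets panic! (or continue inside a loop) appear in a position that must otherwise produce a value.
fn main() {
    let condition = true;

    // Both branches must have the same type; panic! has type `!`, which
    // coerces to i32, so this typechecks.
    let value: i32 = if condition { 5 } else { panic!("unreachable") };

    println!("{value}");
}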
Dynamically Sized Types and the Sized Trait
Rust needs to know certain details about its types, such as how much space to allocate for a value of a particular type. This leaves one corner of its type system a little confusing at first: the concept of dynamically sized types. Sometimes referred to as DSTs or unsized types, these types let us write code using values whose size we can know only at runtime.
Let's dig into the details of a dynamically sized type called str , which we've been using throughout the book. That's right, not &str , but str on its own, is a DST. We can't know how long the string is until runtime, meaning we can't create a variable of type str , nor can we take an argument of type str . Consider the following code, which does not work:
// ! ! Compile Error ! !
let s1: str = "Hello there!";
let s2: str = "How's it going?";
So although a &T is a single value that stores the memory address of where the T is located, a &str is two values: the address of the str and its length. As such, we can know the size of a &str value at compile time: it's twice the length of a usize . That is, we always know the size of a &str , no matter how long the string it refers to is. In general, this is the way in which dynamically sized types are used in Rust: they have an extra bit of metadata that stores the size of the dynamic information. The golden rule of dynamically sized types is that we must always put values of dynamically sized types behind a pointer of some kind.
We can combine str with all kinds of pointers: for example, Box<str> or Rc<str> .
To work with DSTs, Rust provides the Sized trait to determine whether or not a type's size is known at compile time. This trait is automatically implemented for everything whose size is known at compile time. In addition, Rust implicitly adds a bound on Sized to every generic function. That is, a generic function definition like this:
fn generic<T>(t: T) {
// --snip--
}
is actually treated as though we had written this:
fn generic<T: Sized>(t: T) {
// --snip--
}
By default, generic functions will work only on types that have a known size at compile time. However, you can use the following special syntax to relax this restriction:
fn generic<T: ?Sized>(t: &T) {
// --snip--
}
A trait bound on ?Sized means “ T may or may not be Sized ” and this notation overrides the default that generic types must have a known size at compile time. The ?Trait syntax with this meaning is only available for Sized , not any other traits.
Also note that we switched the type of the t parameter from T to &T . Because the type might not be Sized , we need to use it behind some kind of pointer. In this case, we've chosen a reference.
Function Pointers
The fn type is called a function pointer. Passing functions with function pointers will allow you to use functions as arguments to other functions.
The syntax for specifying that a parameter is a function pointer is similar to that of closures. In the example below, we've defined a function add_one that adds one to its parameter. The function do_twice takes two parameters: a function pointer to any function that takes an i32 parameter and returns an i32 , and one i32 value. The do_twice function calls the function f twice, passing it the arg value, then adds the two function call results together. The main function calls do_twice with the arguments add_one and 5 .
fn add_one(x: i32) -> i32 {
x + 1
}
fn do_twice(f: fn(i32) -> i32, arg: i32) -> i32 {
f(arg) + f(arg)
}
fn main() {
let answer = do_twice(add_one, 5);
println!("The answer is: {}", answer);
}
This code prints The answer is: 12 . We specify that the parameter f in do_twice is an fn that takes one parameter of type i32 and returns an i32 . We can then call f in the body of do_twice .
In main , we can pass the function name add_one as the first argument to do_twice . Unlike closures, fn is a type rather than a trait, so we specify fn as the parameter type directly rather than declaring a generic type parameter with one of the Fn traits as a trait bound.
Function pointers implement all three of the closure traits ( Fn , FnMut , and FnOnce ), meaning you can always pass a function pointer as an argument for a function that expects a closure. It's best to write functions using a generic type and one of the closure traits so your functions can accept either functions or closures.
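A sketch of that advice applied to the do_twice example above: taking a generic parameter bounded by Fn lets callers pass either a named function or a closure.
fn add_one(x: i32) -> i32 {
    x + 1
}

// Generic over any callable that takes an i32 and returns an i32.
fn do_twice<F: Fn(i32) -> i32>(f: F, arg: i32) -> i32 {
    f(arg) + f(arg)
}

fn main() {
    // A plain function...
    println!("{}", do_twice(add_one, 5));
    // ...or a closure both work.
    println!("{}", do_twice(|x| x * 3, 5));
}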
We've used macros like println! throughout this book, but we haven't fully explored what a macro is and how it works. The term macro refers to a family of features in Rust: declarative macros with macro_rules! and three kinds of procedural macros:
• Custom #[derive] macros that specify code added with the derive attribute used on structs and enums • Attribute-like macros that define custom attributes usable on any item • Function-like macros that look like function calls but operate on the tokens specified as their argument
The Difference Between Macros and Functions
Fundamentally, macros are a way of writing code that writes other code, which is known as metaprogramming. In Appendix C, we discuss the derive attribute, which generates an implementation of various traits for you. We've also used the println! and vec! macros throughout the book. All of these macros expand to produce more code than the code you've written manually.
Metaprogramming is useful for reducing the amount of code you have to write and maintain, which is also one of the roles of functions. However, macros have some additional powers that functions don't.
A function signature must declare the number and type of parameters the function has. Macros, on the other hand, can take a variable number of parameters: we can call println!("hello") with one argument or println!("hello {}", name) with two arguments. Also, macros are expanded before the compiler interprets the meaning of the code, so a macro can, for example, implement a trait on a given type. A function can't, because it gets called at runtime and a trait needs to be implemented at compile time.
The downside to implementing a macro instead of a function is that macro definitions are more complex than function definitions because you're writing Rust code that writes Rust code. Due to this indirection, macro definitions are generally more difficult to read, understand, and maintain than function definitions.
Another important difference between macros and functions is that you must define macros or bring them into scope before you call them in a file, as opposed to functions you can define anywhere and call anywhere.
Declarative Macros with macro_rules! for General Metaprogramming
The most widely used form of macros in Rust is the declarative macro. These are also sometimes referred to as “macros by example,” “ macro_rules! macros,” or just plain “macros.” At their core, declarative macros allow you to write something similar to a Rust match expression. Match expressions are control structures that take an expression, compare the resulting value of the expression to patterns, and then run the code associated with the matching pattern. Macros also compare a value to patterns that are associated with particular code: in this situation, the value is the literal Rust source code passed to the macro; the patterns are compared with the structure of that source code; and the code associated with each pattern, when matched, replaces the code passed to the macro. This all happens during compilation.
#[macro_export]
macro_rules! vec {
( $( $x:expr ),* ) => {
{
let mut temp_vec = Vec::new();
$(
temp_vec.push($x);
)*
temp_vec
}
};
}
The #[macro_export] annotation indicates that this macro should be made available whenever the crate in which the macro is defined is brought into scope. Without this annotation, the macro can't be brought into scope.
We then start the macro definition with macro_rules! and the name of the macro we're defining, without the exclamation mark.
The structure in the vec! body is similar to the structure of a match expression. Here we have one arm with the pattern ( $( $x:expr ),* ) , followed by => and the block of code associated with this pattern. If the pattern matches, the associated block of code will be emitted. Given that this is the only pattern in this macro, there is only one valid way to match; any other pattern will result in an error. More complex macros will have more than one arm.
First, we use a set of parentheses to encompass the whole pattern. We use a dollar sign $ to declare a variable in the macro system that will contain the Rust code matching the pattern. The dollar sign makes it clear this is a macro variable as opposed to a regular Rust variable. Next comes a set of parentheses that captures values that match the pattern within the parentheses for use in the replacement code. Within $() is $x:expr , which matches any Rust expression and gives the expression the name $x .
The comma following $() indicates that a literal comma separator character could optionally appear after the code that matches the code in $() . The * specifies that the pattern matches zero or more of whatever precedes the * .
When we call this macro with vec![1, 2, 3]; , the $x pattern matches three times with the three expressions 1 , 2 , and 3 .
Now let's look at the code in the body associated with this arm: temp_vec.push() within $()* is generated for each part that matches $() in the pattern, zero or more times depending on how many times the pattern matches. The $x is replaced with each expression matched, as the expansion below shows.
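Given the simplified definition above, the call vec![1, 2, 3] expands to roughly this code:
{
    let mut temp_vec = Vec::new();
    temp_vec.push(1);
    temp_vec.push(2);
    temp_vec.push(3);
    temp_vec
}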
Procedural Macros for Generating Code from Attributes
The second form of macros is the procedural macro, which acts more like a function (and is a type of procedure). Procedural macros accept some code as an input, operate on that code, and produce some code as an output rather than matching against patterns and replacing the code with other code as declarative macros do. The three kinds of procedural macros are custom derive, attribute- like, and function-like, and all work in a similar fashion.
When creating procedural macros, the definitions must reside in their own crate with a special crate type. This is for complex technical reasons that we hope to eliminate in the future.
use proc_macro;
#[some_attribute]
pub fn some_name(input: TokenStream) -> TokenStream {}
The function that defines a procedural macro takes a TokenStream as an input and produces a TokenStream as an output. The TokenStream type is defined by the proc_macro crate that is included with Rust and represents a sequence of tokens. This is the core of the macro: the source code that the macro is operating on makes up the input TokenStream , and the code the macro produces is the output TokenStream . The function also has an attribute attached to it that specifies which kind of procedural macro we're creating. We can have multiple kinds of procedural macros in the same crate.
How to Write a Custom derive Macro
Let's create a crate named hello_macro that defines a trait named HelloMacro with one associated function named hello_macro . Rather than making our users implement the HelloMacro trait for each of their types, we'll provide a procedural macro so users can annotate their type with #[derive(HelloMacro)] to get a default implementation of the hello_macro function. The default implementation will print Hello, Macro! My name is TypeName! where TypeName is the name of the type on which this trait has been defined.
use hello_macro::HelloMacro;
use hello_macro_derive::HelloMacro;
#[derive(HelloMacro)]
struct Pancakes;
fn main() {
Pancakes::hello_macro();
}
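For reference, the corresponding hello_macro_derive crate implements the macro roughly like this, using the syn crate to parse the input tokens and the quote crate to generate the output (this mirrors the book's implementation; the Cargo.toml setup for a proc-macro crate is omitted):
use proc_macro::TokenStream;
use quote::quote;

#[proc_macro_derive(HelloMacro)]
pub fn hello_macro_derive(input: TokenStream) -> TokenStream {
    // Parse the annotated item into a syntax tree we can inspect.
    let ast: syn::DeriveInput = syn::parse(input).unwrap();

    // Build the trait implementation for the annotated type.
    let name = &ast.ident;
    let gen = quote! {
        impl HelloMacro for #name {
            fn hello_macro() {
                println!("Hello, Macro! My name is {}!", stringify!(#name));
            }
        }
    };
    gen.into()
}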
Attribute-like macros
Attribute-like macros are similar to custom derive macros, but instead of generating code for the derive attribute, they allow you to create new attributes. They're also more flexible: derive only works for structs and enums; attributes can be applied to other items as well, such as functions.
Here's an example of using an attribute-like macro: say you have an attribute named route that annotates functions when using a web application framework:
#[route(GET, "/")]
fn index() {}
This #[route] attribute would be defined by the framework as a procedural macro. The signature of the macro definition function would look like this:
#[proc_macro_attribute]
pub fn route(attr: TokenStream, item: TokenStream) -> TokenStream {
Here, we have two parameters of type TokenStream . The first is for the contents of the attribute: the GET, "/" part. The second is the body of the item the attribute is attached to: in this case, fn index() {} and the rest of the function's body.
Function-like macros
Function-like macros define macros that look like function calls. Similarly to macro_rules! macros, they're more flexible than functions; for example, they can take an unknown number of arguments. However, macro_rules! macros can be defined only using the match-like syntax we discussed in the section “Declarative Macros with macro_rules! for General Metaprogramming” earlier. Function-like macros take a TokenStream parameter and their definition manipulates that TokenStream using
Rust code as the other two types of procedural macros do. An example of a function-like macro is an sql! macro that might be called like so:
let sql = sql!(SELECT * FROM posts WHERE id=1);
This macro would parse the SQL statement inside it and check that it's syntactically correct, which is much more complex processing than a macro_rules! macro can do. The sql! macro would be defined like this:
#[proc_macro]
pub fn sql(input: TokenStream) -> TokenStream {
• as - perform primitive casting, disambiguate the specific trait containing an item, or rename items in use statements • async - return a Future instead of blocking the current thread • await - suspend execution until the result of a Future is ready • break - exit a loop immediately • const - define constant items or constant raw pointers • continue - continue to the next loop iteration • crate - in a module path, refers to the crate root • dyn - dynamic dispatch to a trait object • else - fallback for if and if let control flow constructs • enum - define an enumeration • extern - link an external function or variable • false - Boolean false literal • fn - define a function or the function pointer type • for - loop over items from an iterator, implement a trait, or specify a higher-ranked lifetime • if - branch based on the result of a conditional expression • impl - implement inherent or trait functionality • in - part of for loop syntax • let - bind a variable • loop - loop unconditionally • match - match a value to patterns • mod - define a module • move - make a closure take ownership of all its captures • mut - denote mutability in references, raw pointers, or pattern bindings • pub - denote public visibility in struct fields, impl blocks, or modules • ref - bind by reference • return - return from function • Self - a type alias for the type we are defining or implementing • self - method subject or current module • static - global variable or lifetime lasting the entire program execution • struct - define a structure • super - parent module of the current module • trait - define a trait • true - Boolean true literal • type - define a type alias or associated type • union - define a union; is only a keyword when used in a union declaration • unsafe - denote unsafe code, functions, traits, or implementations • use - bring symbols into scope • where - denote clauses that constrain a type • while - loop conditionally based on the result of an expression
Keywords Reserved for Future Use • abstract • become • box • do • final • macro • override • priv • try • typeof • unsized • virtual • yield
Raw identifiers are the syntax that lets you use keywords where they wouldn't normally be allowed. You use a raw identifier by prefixing a keyword with r#
fn r#match(needle: &str, haystack: &str) -> bool {
haystack.contains(needle)
}
fn main() {
assert!(r#match("foo", "foobar"));
}
This code will compile without any errors. Note the r# prefix on the function name in its definition as well as where the function is called in main .
| Operator | Example | Description | Overloadable? |
|---|---|---|---|
| ! | ident!(...) , ident!{...} , ident![...] | Macro expansion | |
| ! | !expr | Bitwise or logical complement | Not |
| != | expr != expr | Nonequality comparison | PartialEq |
| % | expr % expr | Arithmetic remainder | Rem |
| %= | var %= expr | Arithmetic remainder and assignment | RemAssign |
| & | &expr , &mut expr | Borrow | |
| & | &type , &mut type , &'a type , &'a mut type | Borrowed pointer type | |
| & | expr & expr | Bitwise AND | BitAnd |
| &= | var &= expr | Bitwise AND and assignment | BitAndAssign |
| && | expr && expr | Short-circuiting logical AND | |
| * | expr * expr | Arithmetic multiplication | Mul |
| *= | var *= expr | Arithmetic multiplication and assignment | MulAssign |
| * | *expr | Dereference | Deref |
| * | *const type , *mut type | Raw pointer | |
| + | trait + trait , 'a + trait | Compound type constraint | |
| + | expr + expr | Arithmetic addition | Add |
| += | var += expr | Arithmetic addition and assignment | AddAssign |
| , | expr, expr | Argument and element separator | |
| - | - expr | Arithmetic negation | Neg |
| - | expr - expr | Arithmetic subtraction | Sub |
| -= | var -= expr | Arithmetic subtraction and assignment | SubAssign |
| -> | fn(...) -> type | Function and closure return type | |
| . | expr.ident | Member access | |
| .. | .. , expr.. , ..expr , expr..expr | Right-exclusive range literal | PartialOrd |
| ..= | ..=expr , expr..=expr | Right-inclusive range literal | PartialOrd |
| .. | ..expr | Struct literal update syntax | |
| .. | variant(x, ..) , struct_type { x, .. } | “And the rest” pattern binding | |
| ... | expr...expr | (Deprecated, use ..= instead) In a pattern: inclusive range pattern | |
| / | expr / expr | Arithmetic division | Div |
| /= | var /= expr | Arithmetic division and assignment | DivAssign |
| : | pat: type , ident: type | Constraints | |
| : | ident: expr | Struct field initializer | |
| : | 'a: loop {...} | Loop label | |
| ; | expr; | Statement and item terminator | |
| ; | [...; len] | Part of fixed-size array syntax | |
| << | expr << expr | Left-shift | Shl |
| <<= | var <<= expr | Left-shift and assignment | ShlAssign |
| < | expr < expr | Less than comparison | PartialOrd |
| <= | expr <= expr | Less than or equal to comparison | PartialOrd |
| = | var = expr , ident = type | Assignment/equivalence | |
| == | expr == expr | Equality comparison | PartialEq |
| => | pat => expr | Part of match arm syntax | |
| > | expr > expr | Greater than comparison | PartialOrd |
| >= | expr >= expr | Greater than or equal to comparison | PartialOrd |
| >> | expr >> expr | Right-shift | Shr |
| >>= | var >>= expr | Right-shift and assignment | ShrAssign |
| @ | ident @ pat | Pattern binding | |
| ^ | expr ^ expr | Bitwise exclusive OR | BitXor |
| ^= | var ^= expr | Bitwise exclusive OR and assignment | BitXorAssign |
| \| | pat \| pat | Pattern alternatives | |
| \| | expr \| expr | Bitwise OR | BitOr |
| \|= | var \|= expr | Bitwise OR and assignment | BitOrAssign |
| \|\| | expr \|\| expr | Short-circuiting logical OR | |
| ? | expr? | Error propagation | |
The following list contains all symbols that don't function as operators; that is, they don't behave like a function or method call.
| Symbol | Description |
|---|---|
| 'ident | Named lifetime or loop label |
| ...u8 , ...i32 , ...f64 , ...usize , etc. | Numeric literal of specific type |
| "..." | String literal |
| r"..." , r#"..."# , r##"..."## , etc. | Raw string literal, escape characters not processed |
| b"..." | Byte string literal; constructs an array of bytes instead of a string |
| br"..." , br#"..."# , br##"..."## , etc. | Raw byte string literal, combination of raw and byte string literal |
| '...' | Character literal |
| b'...' | ASCII byte literal |
| \|...\| expr | Closure |
| ! | Always empty bottom type for diverging functions |
| _ | “Ignored” pattern binding; also used to make integer literals readable |
Path-Related Syntax
| Symbol | Description |
|---|---|
| ident::ident | Namespace path |
| ::path | Path relative to the crate root (i.e., an explicitly absolute path) |
| self::path | Path relative to the current module (i.e., an explicitly relative path) |
| super::path | Path relative to the parent of the current module |
| type::ident , <type as trait>::ident | Associated constants, functions, and types |
| <type>::... | Associated item for a type that cannot be directly named (e.g., <&T>::... , <[T]>::... , etc.) |
| trait::method(...) | Disambiguating a method call by naming the trait that defines it |
| type::method(...) | Disambiguating a method call by naming the type for which it's defined |
| <type as trait>::method(...) | Disambiguating a method call by naming the trait and type |
Generics
| Symbol | Description |
|---|---|
| path<...> | Specifies parameters to a generic type in a type (e.g., Vec<u8> ) |
| path::<...> , method::<...> | Specifies parameters to a generic type, function, or method in an expression; often referred to as turbofish (e.g., "42".parse::<i32>() ) |
| fn ident<...> ... | Define generic function |
| struct ident<...> ... | Define generic structure |
| enum ident<...> ... | Define generic enumeration |
| impl<...> ... | Define generic implementation |
| for<...> type | Higher-ranked lifetime bounds |
| type<ident=type> | A generic type where one or more associated types have specific assignments (e.g., Iterator<Item=T> ) |
Trait Bound Constraints
| Symbol | Description |
|---|---|
| T: U | Generic parameter T constrained to types that implement U |
| T: 'a | Generic type T must outlive lifetime 'a (meaning the type cannot transitively contain any references with lifetimes shorter than 'a ) |
| T: 'static | Generic type T contains no borrowed references other than 'static ones |
| 'b: 'a | Generic lifetime 'b must outlive lifetime 'a |
| T: ?Sized | Allow generic type parameter to be a dynamically sized type |
| 'a + trait , trait + trait | Compound type constraint |
Macros and Attributes
| Symbol | Description |
|---|---|
| #[meta] | Outer attribute |
| #![meta] | Inner attribute |
| $ident | Macro substitution |
| $ident:kind | Macro capture |
| $(...)... | Macro repetition |
| ident!(...) , ident!{...} , ident![...] | Macro invocation |
Comments
| Symbol | Description |
|---|---|
| // | Line comment |
| //! | Inner line doc comment |
| /// | Outer line doc comment |
| /*...*/ | Block comment |
| /*!...*/ | Inner block doc comment |
| /**...*/ | Outer block doc comment |
Tuples
| Symbol | Description |
|---|---|
| () | Empty tuple (aka unit), both literal and type |
| (expr) | Parenthesized expression |
| (expr,) | Single-element tuple expression |
| (type,) | Single-element tuple type |
| (expr, ...) | Tuple expression |
| (type, ...) | Tuple type |
| expr(expr, ...) | Function call expression; also used to initialize tuple structs and tuple enum variants |
| expr.0 , expr.1 , etc. | Tuple indexing |
Square Brackets
| Context | Description |
|---|---|
| [...] | Array literal |
| [expr; len] | Array literal containing len copies of expr |
| [type; len] | Array type containing len instances of type |
| expr[expr] | Collection indexing. Overloadable ( Index , IndexMut ) |
| expr[..] , expr[a..] , expr[..b] , expr[a..b] | Collection indexing pretending to be collection slicing, using Range , RangeFrom , RangeTo , or RangeFull as the “index” |