Note: Concurrency, closures, and macros are not covered in detail. Please correct me wherever I am wrong.
I created these notes for my personal use; I hope they help someone.
Creating a Project with Cargo
$ cargo new hello_cargo --bin // ‘--lib’ for library
Note: You can change cargo new to use a different version control system, or no version control at all, by using the --vcs flag.
TOML (Tom's Obvious, Minimal Language) is Cargo's configuration format.
Cargo.lock: this file keeps track of the exact versions of the dependencies in your project.
Cargo also provides a command called cargo check. This command quickly checks your code to make sure it compiles but doesn't produce an executable.
Building for Release
When your project is finally ready for release, you can use cargo build --release to compile it with optimizations. This command will create an executable in target/release instead of target/debug. The optimizations make your Rust code run faster, but turning them on lengthens the time it takes for your program to compile.
This is why there are two different profiles: one for development, when you want to rebuild quickly and often, and another for building the final program you'll give to a user that won't be rebuilt repeatedly and that will run as fast as possible. If you're benchmarking your code's running time, be sure to run cargo build --release and benchmark with the executable in target/release.
let mut guess = String::new();
The :: syntax in ::new indicates that new is an associated function of the String type. An associated function is implemented on a type, in this case String, rather than on a particular instance of a String. Some languages call this a static method.
The // syntax starts a comment that continues until the end of the line.
Updating a Crate to Get a New Version
When you want to update a crate, Cargo provides another command, update, which ignores the Cargo.lock file and figures out all the latest versions that fit your specifications in Cargo.toml. If that works, Cargo writes those versions to the Cargo.lock file. By default, however, Cargo will only look for versions greater than 0.3.0 and less than 0.4.0. If the rand crate had released two new versions, 0.3.15 and 0.4.0, running cargo update would pick up 0.3.15 but not 0.4.0.
Another neat feature of Cargo is that you can run the command
cargo doc --open, which will build documentation provided by all your dependencies locally and open it in your browser.
The following is a list of keywords currently in use, with their functionality described.
as - perform primitive casting, disambiguate the specific trait containing an item, or rename items in use statements
async - return a Future instead of blocking the current thread
await - suspend execution until the result of a Future is ready
break - exit a loop immediately
const - define constant items or constant raw pointers
continue - continue to the next loop iteration
crate - in a module path, refers to the crate root
dyn - dynamic dispatch to a trait object
else - fallback for if and if let control flow constructs
enum - define an enumeration
extern - link an external function or variable
false - Boolean false literal
fn - define a function or the function pointer type
for - loop over items from an iterator, implement a trait, or specify a higher-ranked lifetime
if - branch based on the result of a conditional expression
impl - implement inherent or trait functionality
in - part of for loop syntax
let - bind a variable
loop - loop unconditionally
match - match a value to patterns
mod - define a module
move - make a closure take ownership of all its captures
mut - denote mutability in references, raw pointers, or pattern bindings
pub - denote public visibility in struct fields, impl blocks, or modules
ref - bind by reference
return - return from function
Self - a type alias for the type we are defining or implementing
self - method subject or current module
static - global variable or lifetime lasting the entire program execution
struct - define a structure
super - parent module of the current module
trait - define a trait
true - Boolean true literal
type - define a type alias or associated type
union - define a union; is only a keyword when used in a union declaration
unsafe - denote unsafe code, functions, traits, or implementations
use - bring symbols into scope
where - denote clauses that constrain a type
while - loop conditionally based on the result of an expression
The following keywords don't have any functionality yet but are reserved by Rust for potential future use.
abstract
become
box
do
final
macro
override
priv
try
typeof
unsized
virtual
yield
Raw identifiers are the syntax that lets you use keywords where they wouldn't normally be allowed. You use a raw identifier by prefixing a keyword with r#.
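For example, match is a keyword, so it can only be used as a function name with the r# prefix (a minimal sketch taken from the book's example):
fn r#match(needle: &str, haystack: &str) -> bool {
    haystack.contains(needle)
}
fn main() {
    assert!(r#match("foo", "foobar"));
}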
Constants
First, you aren't allowed to use mut with constants. Constants aren't just immutable by default; they are always immutable. You declare constants using the const keyword instead of the let keyword, and the type of the value must be annotated.
const THREE_HOURS_IN_SECONDS: u32 = 60 * 60 * 3;
The last difference is that constants may be set only to a constant expression, not to the result of a value that could only be computed at runtime.
Constants are valid for the entire time a program runs, within the scope in which they were declared.
Shadowing
You can declare a new variable with the same name as a previous variable. Rustaceans say that the first variable is shadowed by the second, which means that the second variable is what the compiler will see when you use the name of the variable. In effect, the second variable overshadows the first, taking any uses of the variable name to itself until either it itself is shadowed or the scope ends. We can shadow a variable by using the same variable's name and repeating the use of the let keyword as follows:
fn main() {
let x = 5;
let x = x + 1;
{
let x = x * 2;
println!("The value of x in the inner scope is: {x}");
}
println!("The value of x is: {x}");
}
Shadowing is different from marking a variable as mut because we'll get a compile-time error if we accidentally try to reassign to this variable without using the let keyword. By using let, we can perform a few transformations on a value but have the variable be immutable after those transformations have been completed.
The other difference between mut and shadowing is that because we're effectively creating a new variable when we use the let keyword again, we can change the type of the value but reuse the same name. For example, say our program asks a user to show how many spaces they want between some text by inputting space characters, and then we want to store that input as a number:
let spaces = " ";
let spaces = spaces.len();
The first spaces variable is a string type and the second spaces variable is a number type. Shadowing thus spares us from having to come up with different names, such as spaces_str and spaces_num; instead, we can reuse the simpler spaces name. However, if we try to use mut for this, as shown here, we'll get a compile-time error:
let mut spaces = " "; // Compile-time Error
spaces = spaces.len();
Data Types
A scalar type represents a single value. Rust has four primary scalar types: integers, floating-point numbers, Booleans, and characters.
| Length | Signed | Unsigned |
|---|---|---|
| 8-bit | i8 | u8 |
| 16-bit | i16 | u16 |
| 32-bit | i32 | u32 |
| 64-bit | i64 | u64 |
| 128-bit | i128 | u128 |
| arch | isize | usize |
Each signed variant can store numbers from -(2^(n-1)) to 2^(n-1) - 1 inclusive, where n is the number of bits that variant uses.
Note: Number literals that can be multiple numeric types allow a type suffix, such as 57u8, to designate the type. Number literals can also use _ as a visual separator to make the number easier to read, such as 1_000, which has the same value as if you had specified 1000.
| Number literals | Example |
|---|---|
| Decimal | 98_222 |
| Hexadecimal | 0xff |
| Octal | 0o77 |
| Binary | 0b1111_0000 |
| Byte (u8 only) | b'A' |
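A quick illustrative sketch of the suffix and separator notation mentioned above:
let with_suffix = 57u8; // the u8 suffix designates the type
let with_separator = 1_000; // same value as 1000; _ is only a visual separator
let hex = 0xff; // 255 written in hexadecimal
let byte = b'A'; // byte literal, of type u8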
Integer Overflow
Let's say you have a variable of type u8 that can hold values between 0 and 255. If you try to change the variable to a value outside that range, such as 256, integer overflow will occur, which can result in one of two behaviors. When you're compiling in debug mode, Rust includes checks for integer overflow that cause your program to panic at runtime if this behavior occurs. Rust uses the term panicking when a program exits with an error.
When you're compiling in release mode with the --release flag, Rust does not include checks for integer overflow that cause panics. Instead, if overflow occurs, Rust performs two's complement wrapping. In short, values greater than the maximum value the type can hold "wrap around" to the minimum of the values the type can hold. In the case of a u8, the value 256 becomes 0, the value 257 becomes 1, and so on. The program won't panic, but the variable will have a value that probably isn't what you were expecting it to have. Relying on integer overflow's wrapping behavior is considered an error.
To explicitly handle the possibility of overflow, you can use these families of methods provided by the standard library for primitive numeric types (see the sketch after this list):
• Wrap in all modes with the wrapping_* methods, such as wrapping_add
• Return the None value if there is overflow with the checked_* methods
• Return the value and a Boolean indicating whether there was overflow with the overflowing_* methods
• Saturate at the value's minimum or maximum values with the saturating_* methods
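A minimal sketch of these method families on a u8 (wrapping_add, checked_add, overflowing_add, and saturating_add are the standard library methods named above):
let x: u8 = 250;
assert_eq!(x.wrapping_add(10), 4); // wraps around past the u8 maximum
assert_eq!(x.checked_add(10), None); // None signals overflow
assert_eq!(x.overflowing_add(10), (4, true)); // value plus an overflow flag
assert_eq!(x.saturating_add(10), 255); // clamps at the u8 maximum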
Floating-Point Types
Rust also has two primitive types for floating-point numbers, which are numbers with decimal points. Rust's floating-point types are f32 and f64, which are 32 bits and 64 bits in size, respectively. The default type is f64 because on modern CPUs it's roughly the same speed as f32 but is capable of more precision. All floating-point types are signed.
Floating-point numbers are represented according to the IEEE-754 standard. The f32 type is a single-precision float, and f64 has double precision.
Numeric Operations
Rust supports the basic mathematical operations you'd expect for all the number types: addition, subtraction, multiplication, division, and remainder. Integer division truncates toward zero to the nearest integer.
fn main() {
let sum = 5 + 10; // addition
let difference = 95.5 - 4.3; // subtraction
let product = 4 * 30; // multiplication
let quotient = 56.7 / 32.2; // division
let floored = 2 / 3; // Results in 0
let remainder = 43 % 5; // remainder
}
The Boolean Type
As in most other programming languages, a Boolean type in Rust has two possible values: true and false. Booleans are one byte in size. The Boolean type in Rust is specified using bool.
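A quick example of declaring Boolean values:
let t = true;
let f: bool = false; // with explicit type annotation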
The Character Type
Rust's char type is the language's most primitive alphabetic type. Here are some examples of declaring char values:
fn main() {
let c = 'z';
let z: char = 'ℤ'; // with explicit type annotation
}
Note that we specify char literals with single quotes, as opposed to string literals, which use double quotes. Rust's char type is four bytes in size and represents a Unicode scalar value, which means it can represent a lot more than just ASCII. Accented letters; Chinese, Japanese, and Korean characters; emoji; and zero-width spaces are all valid char values in Rust. Unicode scalar values range from U+0000 to U+D7FF and U+E000 to U+10FFFF inclusive.
Compound Types
Compound types can group multiple values into one type. Rust has two primitive compound types: tuples and arrays.
The Tuple Type
A tuple is a general way of grouping together a number of values with a variety of types into one compound type. Tuples have a fixed length: once declared, they cannot grow or shrink in size. We create a tuple by writing a comma-separated list of values inside parentheses. Each position in the tuple has a type, and the types of the different values in the tuple don't have to be the same.
fn main() {
let tup: (i32, f64, u8) = (500, 6.4, 1);
}
The variable tup binds to the entire tuple because a tuple is considered a single compound element. To get the individual values out of a tuple, we can use pattern matching to destructure a tuple value, like this:
fn main() {
let tup = (500, 6.4, 1);
let (x, y, z) = tup;
println!("The value of y is: {y}");
}
This program first creates a tuple and binds it to the variable tup. It then uses a pattern with let to take tup and turn it into three separate variables, x, y, and z. This is called destructuring because it breaks the single tuple into three parts. Finally, the program prints the value of y, which is 6.4.
We can also access a tuple element directly by using a period (.) followed by the index of the value we want to access. For example:
fn main() {
let x: (i32, f64, u8) = (500, 6.4, 1);
let five_hundred = x.0;
let six_point_four = x.1;
let one = x.2;
}
This program creates the tuple x and then accesses each element of the tuple using their respective indices. As with most programming languages, the first index in a tuple is 0. The tuple without any values has a special name, unit. This value and its corresponding type are both written () and represent an empty value or an empty return type. Expressions implicitly return the unit value if they don't return any other value.
The Array Type
Another way to have a collection of multiple values is with an array. Unlike a tuple, every element of an array must have the same type. Unlike arrays in some other languages, arrays in Rust have a fixed length. We write the values in an array as a comma-separated list inside square brackets:
fn main() {
let a = [1, 2, 3, 4, 5];
}
Arrays are useful when you want your data allocated on the stack rather than the heap, or when you want to ensure you always have a fixed number of elements. An array isn't as flexible as the vector type, though. A vector is a similar collection type provided by the standard library that is allowed to grow or shrink in size.
You write an array's type using square brackets with the type of each element, a semicolon, and then the number of elements in the array, like so:
let a: [i32; 5] = [1, 2, 3, 4, 5];
You can also initialize an array to contain the same value for each element by specifying the initial value, followed by a semicolon, and then the length of the array in square brackets, as shown here:
let a = [3; 5];
The array named a will contain 5 elements that will all be set to the value 3 initially. This is the same as writing let a = [3, 3, 3, 3, 3]; but in a more concise way.
Accessing Array Elements
An array is a single chunk of memory of a known, fixed size that can be allocated on the stack. You can access elements of an array using indexing, like this:
fn main() {
let a = [1, 2, 3, 4, 5];
let first = a[0];
let second = a[1];
}
Rust code uses snake case as the conventional style for function and variable names, in which all letters are lowercase and underscores separate words.
Statements and Expressions
Function bodies are made up of a series of statements optionally ending in an expression. So far, the functions we've covered haven't included an ending expression, but you have seen an expression as part of a statement. Because Rust is an expression-based language, this is an important distinction to understand. Other languages don't have the same distinctions, so let's look at what statements and expressions are and how their differences affect the bodies of functions.
Statements are instructions that perform some action and do not return a value. Expressions evaluate to a resultant value. Let's look at some examples.
We've actually already used statements and expressions. Creating a variable and assigning a value to it with the let keyword is a statement.
fn main() {
let y = 6;
}
Expressions evaluate to a value and make up most of the rest of the code that you'll write in Rust. Consider a math operation, such as 5 + 6, which is an expression that evaluates to the value 11. Expressions can be part of statements: the 6 in the statement let y = 6; is an expression that evaluates to the value 6. Calling a function is an expression. Calling a macro is an expression. A new scope block created with curly brackets is an expression, for example:
fn main() {
let y = {
let x = 3;
x + 1
};
println!("The value of y is: {y}");
}
This expression:
{
let x = 3;
x + 1
}
is a block that, in this case, evaluates to 4. That value gets bound to y as part of the let statement. Note that the x + 1 line doesn't have a semicolon at the end, unlike most of the lines you've seen so far. Expressions do not include ending semicolons. If you add a semicolon to the end of an expression, you turn it into a statement, and it will then not return a value. Keep this in mind as you explore function return values and expressions next.
Functions with Return Values
Functions can return values to the code that calls them. We don't name return values, but we must declare their type after an arrow (->). In Rust, the return value of the function is synonymous with the value of the final expression in the block of the body of a function. You can return early from a function by using the return keyword and specifying a value, but most functions return the last expression implicitly. Here's an example of a function that returns a value:
fn five() -> i32 {
5
}
fn main() {
let x = five();
println!("The value of x is: {x}");
}
if Expressions
An if expression allows you to branch your code depending on conditions. You provide a condition and then state, "If this condition is met, run this block of code. If the condition is not met, do not run this block of code." Blocks of code associated with the conditions in if expressions are sometimes called arms, just like the arms in match expressions.
fn main() {
let number = 3;
if number < 5 {
println!("condition was true");
} else {
println!("condition was false");
}
}
Handling Multiple Conditions with else if
You can use multiple conditions by combining if and else in an else if expression. For example:
fn main() {
let number = 6;
if number % 4 == 0 {
println!("number is divisible by 4");
} else if number % 3 == 0 {
println!("number is divisible by 3");
} else if number % 2 == 0 {
println!("number is divisible by 2");
} else {
println!("number is not divisible by 4, 3, or 2");
}
}
When this program executes, it checks each if expression in turn and executes the first body for which the condition evaluates to true. Note that even though 6 is divisible by 2, we don't see the output number is divisible by 2, nor do we see the number is not divisible by 4, 3, or 2 text from the else block. That's because Rust only executes the block for the first true condition, and once it finds one, it doesn't even check the rest.
Using if in a let Statement
Because if is an expression, we can use it on the right side of a let statement to assign the outcome to a variable,
fn main() {
let condition = true;
let number = if condition { 5 } else { 6 };
println!("The value of number is: {number}");
}
Rust has three kinds of loops: loop, while, and for.
The loop keyword tells Rust to execute a block of code over and over again forever or until you explicitly tell it to stop.
Returning Values from Loops
One of the uses of a loop is to retry an operation you know might fail, such as checking whether a thread has completed its job. You might also need to pass the result of that operation out of the loop to the rest of your code. To do this, you can add the value you want returned after the break expression you use to stop the loop; that value will be returned out of the loop so you can use it, as shown here:
fn main() {
let mut counter = 0;
let result = loop {
counter += 1;
if counter == 10 {
break counter * 2;
}
};
println!("The result is {result}");
}
Loop Labels to Disambiguate Between Multiple Loops
If you have loops within loops, break and continue apply to the innermost loop at that point. You can optionally specify a loop label on a loop that you can then use with break or continue to specify that those keywords apply to the labeled loop instead of the innermost loop. Loop labels must begin with a single quote. Here's an example with two nested loops:
fn main() {
let mut count = 0;
'counting_up: loop {
println!("count = {count}");
let mut remaining = 10;
loop {
println!("remaining = {remaining}");
if remaining == 9 {
break;
}
if count == 2 {
break 'counting_up;
}
remaining -= 1;
}
count += 1;
}
println!("End count = {count}");
}
The outer loop has the label 'counting_up, and it will count up from 0 to 2. The inner loop without a label counts down from 10 to 9. The first break that doesn't specify a label will exit the inner loop only. The break 'counting_up; statement will exit the outer loop.
for Loops
Here's what the countdown would look like using a for loop and another method we've not yet talked about, rev, to reverse the range:
fn main() {
for number in (1..4).rev() {
println!("{number}!");
}
println!("LIFTOFF!!!");
}
Ownership is a set of rules that govern how a Rust program manages memory. All programs have to manage the way they use a computer's memory while running. Some languages have garbage collection that regularly looks for no-longer-used memory as the program runs; in other languages, the programmer must explicitly allocate and free the memory. Rust uses a third approach: memory is managed through a system of ownership with a set of rules that the compiler checks. If any of the rules are violated, the program won't compile. None of the features of ownership will slow down your program while it's running.
Ownership Rules
First, let's take a look at the ownership rules. Keep these rules in mind as we work through the examples that illustrate them:
• Each value in Rust has an owner.
• There can only be one owner at a time.
• When the owner goes out of scope, the value will be dropped.
Memory and Allocation
In the case of a string literal, we know the contents at compile time, so the text is hardcoded directly into the final executable. This is why string literals are fast and efficient. But these properties only come from the string literal's immutability. Unfortunately, we can't put a blob of memory into the binary for each piece of text whose size is unknown at compile time and whose size might change while running the program.
With the String type, in order to support a mutable, growable piece of text, we need to allocate an amount of memory on the heap, unknown at compile time, to hold the contents. This means:
• The memory must be requested from the memory allocator at runtime.
• We need a way of returning this memory to the allocator when we're done with our String.
That first part is done by us: when we call String::from, its implementation requests the memory it needs. This is pretty much universal in programming languages.
However, the second part is different. In languages with a garbage collector (GC), the GC keeps track of and cleans up memory that isn't being used anymore, and we don't need to think about it. In most languages without a GC, it's our responsibility to identify when memory is no longer being used and to call code to explicitly free it, just as we did to request it. Doing this correctly has historically been a difficult programming problem: if we forget, we'll waste memory; if we do it too early, we'll have an invalid variable; if we do it twice, that's a bug too. We need to pair exactly one allocate with exactly one free.
Rust takes a different path: the memory is automatically returned once the variable that owns it goes out of scope.
{
let s = String::from("hello");// s is valid from this point //forward
// do stuff with s
} // this scope is now over, and s is no
// longer valid
There is a natural point at which we can return the memory our String needs to the allocator: when s goes out of scope. When a variable goes out of scope, Rust calls a special function for us. This function is called drop, and it's where the author of String can put the code to return the memory. Rust calls drop automatically at the closing curly bracket.
Ways Variables and Data Interact: Clone
If we do want to deeply copy the heap data of the String, not just the stack data, we can use a common method called clone.
let s1 = String::from("hello");
let s2 = s1.clone();
println!("s1 = {}, s2 = {}", s1, s2);
Stack-Only Data: Copy
let x = 5;
let y = x;
println!("x = {}, y = {}", x, y);
But this code seems to contradict what we just learned: we don't have a call to clone, but x is still valid and wasn't moved into y.
The reason is that types such as integers that have a known size at compile time are stored entirely on the stack, so copies of the actual values are quick to make. That means there's no reason we would want to prevent x from being valid after we create the variable y. In other words, there's no difference between deep and shallow copying here, so calling clone wouldn't do anything different from the usual shallow copying, and we can leave it out.
So, what types implement the Copy trait? You can check the documentation for the given type to be sure, but as a general rule, any group of simple scalar values can implement Copy, and nothing that requires allocation or is some form of resource can implement Copy. Here are some of the types that implement Copy:
• All the integer types, such as u32.
• The Boolean type, bool, with values true and false.
• All the floating-point types, such as f64.
• The character type, char.
• Tuples, if they only contain types that also implement Copy. For example, (i32, i32) implements Copy, but (i32, String) does not.
The ownership of a variable follows the same pattern every time: assigning a value to another variable moves it. When a variable that includes data on the heap goes out of scope, the value will be cleaned up by drop unless ownership of the data has been moved to another variable.
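A minimal sketch of a move: after assigning s1 to s2, the String's heap data is owned by s2, and using s1 afterwards is a compile-time error:
let s1 = String::from("hello");
let s2 = s1; // s1 is moved into s2
// println!("{}", s1); // error: borrow of moved value: `s1`
println!("{}", s2); // fine: s2 owns the data now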
References and Borrowing
A reference is like a pointer in that it's an address we can follow to access the data stored at that address; that data is owned by some other variable. Unlike a pointer, a reference is guaranteed to point to a valid value of a particular type for the life of that reference.
fn main() {
let s1 = String::from("hello");
let len = calculate_length(&s1);
println!("The length of '{}' is {}.", s1, len);
}
fn calculate_length(s: &String) -> usize {
s.len()
}
First, notice that all the tuple code in the variable declaration and the function return value is gone. Second, note that we pass &s1 into calculate_length and, in its definition, we take &String rather than String. These ampersands represent references, and they allow you to refer to some value without taking ownership of it.
Note: The opposite of referencing by using & is dereferencing, which is accomplished with the dereference operator, *.
We call the action of creating a reference borrowing. As in real life, if a person owns something, you can borrow it from them. When you're done, you have to give it back. You don't own it.
So, what happens if we try to modify something we're borrowing?
// !! Compile time Error !!
fn main() {
let s = String::from("hello");
change(&s);
}
fn change(some_string: &String) {
some_string.push_str(", world");
}
Just as variables are immutable by default, so are references. We're not allowed to modify something we have a reference to.
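The fix, as in the book, is to make s mutable and have change take a mutable reference instead:
fn main() {
    let mut s = String::from("hello");
    change(&mut s);
    println!("{}", s); // prints "hello, world"
}
fn change(some_string: &mut String) {
    some_string.push_str(", world");
}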
Mutable References
Mutable references have one big restriction: if you have a mutable reference to a value, you can have no other references to that value. This code that attempts to create two mutable references to s will fail:
let mut s = String::from("hello");
let r1 = &mut s;
let r2 = &mut s;
println!("{}, {}", r1, r2);
The restriction preventing multiple mutable references to the same data at the same time allows for mutation but in a very controlled fashion. It's something that new Rustaceans struggle with because most languages let you mutate whenever you'd like. The benefit of having this restriction is that Rust can prevent data races at compile time. A data race is similar to a race condition and happens when these three behaviors occur:
• Two or more pointers access the same data at the same time.
• At least one of the pointers is being used to write to the data.
• There's no mechanism being used to synchronize access to the data.
As always, we can use curly brackets to create a new scope, allowing for multiple mutable references, just not simultaneous ones:
let mut s = String::from("hello");
{
let r1 = &mut s;
} // r1 goes out of scope here, so we can make a new reference with no problems.
let r2 = &mut s;
Rust enforces a similar rule for combining mutable and immutable references. This code results in an error:
let mut s = String::from("hello");
let r1 = &s; // no problem
let r2 = &s; // no problem
let r3 = &mut s; // BIG PROBLEM
println!("{}, {}, and {}", r1, r2, r3);
Whew! We also cannot have a mutable reference while we have an immutable one to the same value.
Users of an immutable reference don't expect the value to suddenly change out from under them! However, multiple immutable references are allowed because no one who is just reading the data has the ability to affect anyone else's reading of the data.
Note that a reference's scope starts from where it is introduced and continues through the last time that reference is used. For instance, this code will compile because the last usage of the immutable references, the println!, occurs before the mutable reference is introduced:
let mut s = String::from("hello");
let r1 = &s; // no problem
let r2 = &s; // no problem
println!("{} and {}", r1, r2);
// variables r1 and r2 will not be used after this point
let r3 = &mut s; // no problem
println!("{}", r3);
The scopes of the immutable references r1 and r2 end after the println! where they are last used, which is before the mutable reference r3 is created. These scopes don't overlap, so this code is allowed. The ability of the compiler to tell that a reference is no longer being used at a point before the end of the scope is called non-lexical lifetimes (NLL for short).
String Slices
A string slice is a reference to part of a String, and it looks like this:
let s = String::from("hello world");
let hello = &s[0..5];
let world = &s[6..11];
Rather than a reference to the entire String, hello is a reference to a portion of the String, specified in the extra [0..5] bit. We create slices using a range within brackets by specifying [starting_index..ending_index], where starting_index is the first position in the slice and ending_index is one more than the last position in the slice. Internally, the slice data structure stores the starting position and the length of the slice, which corresponds to ending_index minus starting_index. So, in the case of let world = &s[6..11];, world would be a slice that contains a pointer to the byte at index 6 of s with a length value of 5.
With Rust's .. range syntax, if you want to start at index zero, you can drop the value before the two periods. In other words, these are equal:
let s = String::from("hello");
let slice = &s[0..2];
let slice = &s[..2];
By the same token, if your slice includes the last byte of the String, you can drop the trailing number. That means these are equal:
let s = String::from("hello");
let len = s.len();
let slice = &s[3..len];
let slice = &s[3..];
You can also drop both values to take a slice of the entire string. So these are equal:
let s = String::from("hello");
let len = s.len();
let slice = &s[0..len];
let slice = &s[..];
Defining and Instantiating Structs
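The examples that follow assume a User struct along these lines (the fields are inferred from the code below and match the book's definition):
struct User {
    active: bool,
    username: String,
    email: String,
    sign_in_count: u64,
}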
fn build_user(email: String, username: String) -> User {
User {
email: email,
username: username,
active: true,
sign_in_count: 1,
}
}
It makes sense to name the function parameters with the same name as the struct fields, but having to repeat the email and username field names and variables is a bit tedious. If the struct had more fields, repeating each name would get even more annoying.
Using the Field Init Shorthand
Because the parameter names and the struct field names are exactly the same in the last example, we can use the field init shorthand syntax to rewrite build_user so it behaves exactly the same but doesn't have the repetition of email and username:
fn build_user(email: String, username: String) -> User {
User {
email,
username,
active: true,
sign_in_count: 1,
}
}
Here, we're creating a new instance of the User struct, which has a field named email. We want to set the email field's value to the value in the email parameter of the build_user function. Because the email field and the email parameter have the same name, we only need to write email rather than email: email.
Creating Instances from Other Instances with Struct Update Syntax
It's often useful to create a new instance of a struct that includes most of the values from another instance but changes some. You can do this using struct update syntax.
fn main() {
// --snip--
let user2 = User {
active: user1.active,
username: user1.username,
email: String::from("[email protected]"),
sign_in_count: user1.sign_in_count,
};
}
Using struct update syntax, we can achieve the same effect with less code, as shown in the last example. The .. syntax specifies that the remaining fields not explicitly set should have the same value as the fields in the given instance.
fn main() {
// --snip--
let user2 = User {
email: String::from("[email protected]"),
..user1
};
}
The code in the last example also creates an instance in user2 that has a different value for email but has the same values for the username, active, and sign_in_count fields from user1. The ..user1 must come last to specify that any remaining fields should get their values from the corresponding fields in user1, but we can choose to specify values for as many fields as we want in any order, regardless of the order of the fields in the struct's definition.
Note that the struct update syntax uses = like an assignment; this is because it moves the data, just as we saw in the "Ways Variables and Data Interact: Move" section. In this example, we can no longer use user1 after creating user2 because the String in the username field of user1 was moved into user2. If we had given user2 new String values for both email and username, and thus only used the active and sign_in_count values from user1, then user1 would still be valid after creating user2. The types of active and sign_in_count are types that implement the Copy trait, so the behavior we discussed in the "Stack-Only Data: Copy" section would apply.
Using Tuple Structs Without Named Fields to Create Different Types
Rust also supports structs that look similar to tuples, called tuple structs. Tuple structs have the added meaning the struct name provides but don't have names associated with their fields; rather, they just have the types of the fields. Tuple structs are useful when you want to give the whole tuple a name and make the tuple a different type from other tuples, and when naming each field as in a regular struct would be verbose or redundant.
struct Color(i32, i32, i32);
struct Point(i32, i32, i32);
fn main() {
let black = Color(0, 0, 0);
let origin = Point(0, 0, 0);
}
Unit-Like Structs Without Any Fields
You can also define structs that don't have any fields! These are called unit-like structs because they behave similarly to (), the unit type we mentioned in "The Tuple Type" section. Unit-like structs can be useful when you need to implement a trait on some type but don't have any data that you want to store in the type itself.
struct AlwaysEqual;
fn main() {
let subject = AlwaysEqual;
}
Method Syntax
Methods are similar to functions: we declare them with the fn keyword and a name, they can have parameters and a return value, and they contain some code that's run when the method is called from somewhere else. Unlike functions, methods are defined within the context of a struct (or an enum or a trait object), and their first parameter is always self , which represents the instance of the struct the method is being called on.
struct Rectangle {
width: u32,
height: u32,
}
impl Rectangle {
fn area(&self) -> u32 {
self.width * self.height
}
}
fn main() {
let rect1 = Rectangle {
width: 30,
height: 50,
};
println!(
"The area of the rectangle is {} square pixels.",rect1.area()
);
}
To define the function within the context of Rectangle , we start an impl (implementation) block for Rectangle . Everything within this impl block will be associated with the Rectangle type. Then we move the area function within the impl curly brackets and change the first (and in this case, only) parameter to be self in the signature and everywhere within the body. In main , where we called the area function and passed rect1 as an argument, we can instead use method syntax to call the area method on our Rectangle instance. The method syntax goes after an instance: we add a dot followed by the method name, parentheses, and any arguments.
In the signature for area , we use &self instead of rectangle: &Rectangle . The &self is actually short for self: &Self . Within an impl block, the type Self is an alias for the type that the impl block is for. Methods must have a parameter named self of type Self for their first parameter, so Rust lets you abbreviate this with only the name self in the first parameter spot. Note that we still need to use the & in front of the self shorthand to indicate this method borrows the Self instance, just as we did in rectangle: &Rectangle . Methods can take ownership of self , borrow self immutably as we've done here, or borrow self mutably, just as they can any other parameter.
We've chosen &self here for the same reason we used &Rectangle in the function version: we don't want to take ownership, and we just want to read the data in the struct, not write to it. If we wanted to change the instance that we've called the method on as part of what the method does, we'd use &mut self as the first parameter. Having a method that takes ownership of the instance by using just self as the first parameter is rare; this technique is usually used when the method transforms self into something else and you want to prevent the caller from using the original instance after the transformation.
Note that we can choose to give a method the same name as one of the struct's fields. For example, we can define a method on Rectangle also named width :
impl Rectangle {
fn width(&self) -> bool {
self.width > 0
}
}
Often, but not always, when we give methods the same name as a field we want it to only return the value in the field and do nothing else. Methods like this are called getters, and Rust does not implement them automatically for struct fields as some other languages do. Getters are useful because you can make the field private but the method public and thus enable read-only access to that field as part of the type's public API.
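Here, when we follow rect1.width with parentheses, Rust knows we mean the method width; without parentheses, Rust knows we mean the field width. A short usage sketch, assuming the Rectangle struct defined earlier:
fn main() {
    let rect1 = Rectangle { width: 30, height: 50 };
    if rect1.width() {
        println!("The rectangle has a nonzero width; it is {}", rect1.width);
    }
}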
Associated Functions
All functions defined within an impl block are called associated functions because they're associated with the type named after the impl . We can define associated functions that don't have self as their first parameter (and thus are not methods) because they don't need an instance of the type to work with. We've already used one function like this: the String::from function that's defined on the String type.
Associated functions that aren't methods are often used for constructors that will return a new instance of the struct. These are often called new , but new isn't a special name and isn't built into the language. For example, we could choose to provide an associated function named square that would have one dimension parameter and use that as both width and height, thus making it easier to create a square Rectangle rather than having to specify the same value twice:
impl Rectangle {
fn square(size: u32) -> Self {
Self {
width: size,
height: size,
}
}
}
The Self keywords in the return type and in the body of the function are aliases for the type that appears after the impl keyword, which in this case is Rectangle .
To call this associated function, we use the :: syntax with the struct name; let sq = Rectangle::square(3); is an example. This function is namespaced by the struct: the :: syntax is used for both associated functions and namespaces created by modules.
Multiple impl Blocks
Each struct is allowed to have multiple impl blocks.
impl Rectangle {
fn area(&self) -> u32 {
self.width * self.height
}
}
impl Rectangle {
fn can_hold(&self, other: &Rectangle) -> bool {
self.width > other.width && self.height > other.height
}
}
Enums and Pattern Matching
Enums allow you to define a type by enumerating its possible variants.
enum IpAddrKind {
V4,
V6,
}
IpAddrKind is now a custom data type that we can use elsewhere in our code.
Enum Values We can create instances of each of the two variants of IpAddrKind like this:
let four = IpAddrKind::V4;
let six = IpAddrKind::V6;
Note that the variants of the enum are namespaced under its identifier, and we use a double colon to separate the two. This is useful because now both values IpAddrKind::V4 and IpAddrKind::V6 are of the same type: IpAddrKind . We can then, for instance, define a function that takes any IpAddrKind :
fn route(ip_kind: IpAddrKind) {}
And we can call this function with either variant:
route(IpAddrKind::V4);
route(IpAddrKind::V6);
However, representing the same concept using just an enum is more concise: rather than an enum inside a struct, we can put data directly into each enum variant. This new definition of the IpAddr enum says that both V4 and V6 variants will have associated String values:
enum IpAddr {
V4(String),
V6(String),
}
let home = IpAddr::V4(String::from("127.0.0.1"));
let loopback = IpAddr::V6(String::from("::1"));
There's another advantage to using an enum rather than a struct : each variant can have different types and amounts of associated data. Version four type IP addresses will always have four numeric components that will have values between 0 and 255 . If we wanted to store V4 addresses as four u8 values but still express V6 addresses as one String value, we wouldn't be able to with a struct. Enums handle this case with ease:
enum IpAddr {
V4(u8, u8, u8, u8),
V6(String),
}
let home = IpAddr::V4(127, 0, 0, 1);
let loopback = IpAddr::V6(String::from("::1"));
struct Ipv4Addr {
// --snip--
}
struct Ipv6Addr {
// --snip--
}
enum IpAddr {
V4(Ipv4Addr),
V6(Ipv6Addr),
}
This code illustrates that you can put any kind of data inside an enum variant: strings, numeric types, or structs, for example. You can even include another enum! Also, standard library types are often not much more complicated than what you might come up with.
enum Message {
Quit,
Move { x: i32, y: i32 },
Write(String),
ChangeColor(i32, i32, i32),
}
Defining an enum with variants such as the ones in last example is similar to defining different kinds of struct definitions, except the enum doesn't use the struct keyword and all the variants are grouped together under the Message type. The following structs could hold the same data that the preceding enum variants hold:
struct QuitMessage; // unit struct
struct MoveMessage {
x: i32,
y: i32,
}
struct WriteMessage(String); // tuple struct
struct ChangeColorMessage(i32, i32, i32); // tuple struct
But if we used the different structs, which each have their own type, we couldn't as easily define a function to take any of these kinds of messages as we could with the Message enum
There is one more similarity between enums and structs: just as we're able to define methods on structs using impl , we're also able to define methods on enums. Here's a method named call that we could define on our Message enum:
impl Message {
fn call(&self) {
// method body would be defined here
}
}
let m = Message::Write(String::from("hello"));
m.call();
The Option Enum and Its Advantages Over Null Values
This section explores a case study of Option , which is another enum defined by the standard library. The Option type encodes the very common scenario in which a value could be something or it could be nothing.
Programming language design is often thought of in terms of which features you include, but the features you exclude are important too. Rust doesn't have the null feature that many other languages have. Null is a value that means there is no value there. In languages with null , variables can always be in one of two states: null or not-null.
The problem isn't really with the concept but with the particular implementation. As such, Rust does not have nulls, but it does have an enum that can encode the concept of a value being present or absent. This enum is Option<T> , and it is defined by the standard library as follows:
enum Option<T> {
None,
Some(T),
}
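Some examples of holding number and char values in Option (the type inside Some is inferred; for None alone we must annotate the overall Option type):
let some_number = Some(5);
let some_char = Some('e');
let absent_number: Option<i32> = None;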
In short, because Option<T> and T (where T can be any type) are different types, the compiler won't let us use an Option<T> value as if it were definitely a valid value. For example, this code won't compile because it's trying to add an i8 to an Option<i8> :
// !! NOT COMPILE !!
let x: i8 = 5;
let y: Option<i8> = Some(5);
let sum = x + y;
In other words, you have to convert an Option<T> to a T before you can perform T operations with it. Generally, this helps catch one of the most common issues with null: assuming that something isn't null when it actually is.
Eliminating the risk of incorrectly assuming a not-null value helps you to be more confident in your code. In order to have a value that can possibly be null, you must explicitly opt in by making the type of that value Option<T> . Then, when you use that value, you are required to explicitly handle the case when the value is null. Everywhere that a value has a type that isn't an Option<T> , you can safely assume that the value isn't null. This was a deliberate design decision for Rust to limit null's pervasiveness and increase the safety of Rust code.
The match Control Flow Construct
Rust has an extremely powerful control flow construct called match that allows you to compare a value against a series of patterns and then execute code based on which pattern matches. Patterns can be made up of literal values, variable names, wildcards, and many other things. The power of match comes from the expressiveness of the patterns and the fact that the compiler confirms that all possible cases are handled.
Think of a match expression as being like a coin-sorting machine: coins slide down a track with variously sized holes along it, and each coin falls through the first hole it encounters that it fits into. In the same way, values go through each pattern in a match , and at the first pattern the value “fits,” the value falls into the associated code block to be used during execution.
enum Coin {
Penny,
Nickel,
Dime,
Quarter,
}
fn value_in_cents(coin: Coin) -> u8 {
match coin {
Coin::Penny => 1,
Coin::Nickel => 5,
Coin::Dime => 10,
Coin::Quarter => 25,
}
}
This seems very similar to an expression used with if , but there's a big difference: with if , the expression needs to return a Boolean value, but here, it can return any type.
Patterns that Bind to Values
Another useful feature of match arms is that they can bind to the parts of the values that match the pattern. This is how we can extract values out of enum variants.
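A sketch of this, following the book's example: the Coin enum is changed so the Quarter variant holds a UsState value, and the match arm binds that inner value to the variable state:
#[derive(Debug)]
enum UsState {
    Alabama,
    Alaska,
    // --snip--
}
enum Coin {
    Penny,
    Nickel,
    Dime,
    Quarter(UsState),
}
fn value_in_cents(coin: Coin) -> u8 {
    match coin {
        Coin::Penny => 1,
        Coin::Nickel => 5,
        Coin::Dime => 10,
        Coin::Quarter(state) => {
            println!("State quarter from {:?}!", state);
            25
        }
    }
}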
Catch-all Patterns and the _ Placeholder
Using enums, we can also take special actions for a few particular values, but for all other values take one default action. Imagine we're implementing a game where, if you roll a 3 on a dice roll, your player doesn't move, but instead gets a new fancy hat. If you roll a 7, your player loses a fancy hat. For all other values, your player moves that number of spaces on the game board. Here's a match that implements that logic, with the result of the dice roll hardcoded rather than a random value, and all other logic represented by functions without bodies because actually implementing them is out of scope for this example:
let dice_roll = 9;
match dice_roll {
3 => add_fancy_hat(),
7 => remove_fancy_hat(),
other => move_player(other),
}
fn add_fancy_hat() {}
fn remove_fancy_hat() {}
fn move_player(num_spaces: u8) {}
For the first two arms, the patterns are the literal values 3 and 7. For the last arm that covers every other possible value, the pattern is the variable we've chosen to name other . The code that runs for the other arm uses the variable by passing it to the move_player function.
Rust also has a pattern we can use when we want a catch-all but don't want to use the value in the catch-all pattern: _ is a special pattern that matches any value and does not bind to that value. This tells Rust we aren't going to use the value, so Rust won't warn us about an unused variable. Let's change the rules of the game: now, if you roll anything other than a 3 or a 7, you must roll again. We no longer need to use the catch-all value, so we can change our code to use _ instead of the variable named other :
let dice_roll = 9;
match dice_roll {
3 => add_fancy_hat(),
7 => remove_fancy_hat(),
_ => reroll(),
}
fn add_fancy_hat() {}
fn remove_fancy_hat() {}
fn reroll() {}
This example also meets the exhaustiveness requirement because we're explicitly ignoring all other values in the last arm; we haven't forgotten anything. Finally, we'll change the rules of the game one more time, so that nothing else happens on your turn if you roll anything other than a 3 or a 7. We can express that by using the unit value (the empty tuple type we mentioned in “The Tuple Type” section) as the code that goes with the _ arm:
let dice_roll = 9;
match dice_roll {
3 => add_fancy_hat(),
7 => remove_fancy_hat(),
_ => (),
}
fn add_fancy_hat() {}
fn remove_fancy_hat() {}
Here, we're telling Rust explicitly that we aren't going to use any other value that doesn't match a pattern in an earlier arm, and we don't want to run any code in this case.
Concise Control Flow with if let
The if let syntax lets you combine if and let into a less verbose way to handle values that match one pattern while ignoring the rest. Consider the program in example that matches on an Option<u8> value in the config_max variable but only wants to execute code if the value is the Some variant.
let config_max = Some(3u8);
match config_max {
Some(max) => println!("The maximum is configured to be {}", max),
_ => (),
}
If the value is Some , we print out the value in the Some variant by binding the value to the variable max in the pattern. We don't want to do anything with the None value. To satisfy the match expression, we have to add _ => () after processing just one variant, which is annoying boilerplate code to add.
Instead, we could write this in a shorter way using if let . The following code behaves the same as the match
let config_max = Some(3u8);
if let Some(max) = config_max {
println!("The maximum is configured to be {}", max);
}
The syntax if let takes a pattern and an expression separated by an equal sign. It works the same way as a match , where the expression is given to the match and the pattern is its first arm. In this case, the pattern is Some(max) , and the max binds to the value inside the Some . We can then use max in the body of the if let block in the same way as we used max in the corresponding match arm. The code in the if let block isn't run if the value doesn't match the pattern.
Using if let means less typing, less indentation, and less boilerplate code. However, you lose the exhaustive checking that match enforces. Choosing between match and if let depends on what you're doing in your particular situation and whether gaining conciseness is an appropriate trade-off for losing exhaustive checking.
In other words, you can think of if let as syntax sugar for a match that runs code when the value matches one pattern and then ignores all other values.
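We can also include an else with an if let; the else block runs when the value doesn't match the pattern, just like the _ arm of the equivalent match. A small sketch counting non-quarter coins (assuming the Coin enum with Quarter(UsState) from the match section):
let coin = Coin::Penny;
let mut count = 0;
if let Coin::Quarter(state) = coin {
    println!("State quarter from {:?}!", state);
} else {
    count += 1;
}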
Packages and Crates
A crate is the smallest amount of code that the Rust compiler considers at a time. Even if you run rustc rather than cargo and pass a single source code file, the compiler considers that file to be a crate. Crates can contain modules, and the modules may be defined in other files that get compiled with the crate, as we'll see in the coming sections.
A crate can come in one of two forms: a binary crate or a library crate. Binary crates are programs you can compile to an executable that you can run, such as a command-line program or a server. Each must have a function called main that defines what happens when the executable runs. All the crates we've created so far have been binary crates.
Library crates don't have a main function, and they don't compile to an executable. Instead, they define functionality intended to be shared with multiple projects.
A package can contain as many binary crates as you like, but at most only one library crate. A package must contain at least one crate, whether that's a library or binary crate.
Grouping Related Code in Modules
Modules let us organize code within a crate for readability and easy reuse. Modules also allow us to control the privacy of items, because code within a module is private by default. Private items are internal implementation details not available for outside use. We can choose to make modules and the items within them public, which exposes them to allow external code to use and depend on them.
Earlier, we mentioned that src/main.rs and src/lib.rs are called crate roots. The reason for their name is that the contents of either of these two files form a module named crate at the root of the crate's module structure, known as the module tree.
Paths for Referring to an Item in the Module Tree
To show Rust where to find an item in a module tree, we use a path in the same way we use a path when navigating a filesystem. To call a function, we need to know its path.
A path can take two forms: • An absolute path is the full path starting from a crate root; for code from an external crate, the absolute path begins with the crate name, and for code from the current crate, it starts with the literal crate . • A relative path starts from the current module and uses self , super , or an identifier in the current module.
Both absolute and relative paths are followed by one or more identifiers separated by double colons :: .
mod front_of_house {
mod hosting {
fn add_to_waitlist() {}
}
}
pub fn eat_at_restaurant() {
// Absolute path
crate::front_of_house::hosting::add_to_waitlist();
// Relative path
front_of_house::hosting::add_to_waitlist();
}
The first time we call the add_to_waitlist function in eat_at_restaurant , we use an absolute path. The add_to_waitlist function is defined in the same crate as eat_at_restaurant , which means we can use the crate keyword to start an absolute path. We then include each of the successive modules until we make our way to add_to_waitlist . You can imagine a filesystem with the same structure: we'd specify the path /front_of_house/hosting/add_to_waitlist to run the add_to_waitlist program; using the crate name to start from the crate root is like using / to start from the filesystem root in your shell.
The second time we call add_to_waitlist in eat_at_restaurant , we use a relative path. The path starts with front_of_house , the name of the module defined at the same level of the module tree as eat_at_restaurant . Here the filesystem equivalent would be using the path front_of_house/hosting/add_to_waitlist . Starting with a module name means that the path is relative.
Items in a parent module can't use the private items inside child modules, but items in child modules can use the items in their ancestor modules. This is because child modules wrap and hide their implementation details, but the child modules can see the context in which they're defined. To continue with our metaphor, think of the privacy rules as being like the back office of a restaurant: what goes on in there is private to restaurant customers, but office managers can see and do everything in the restaurant they operate.
Rust chose to have the module system function this way so that hiding inner implementation details is the default. That way, you know which parts of the inner code you can change without breaking outer code. However, Rust does give you the option to expose inner parts of child modules' code to outer ancestor modules by using the pub keyword to make an item public.
Exposing Paths with the pub Keyword
In the absolute path, we start with crate , the root of our crate's module tree. The front_of_house module is defined in the crate root. While front_of_house isn't public, because the eat_at_restaurant function is defined in the same module as front_of_house (that is, eat_at_restaurant and front_of_house are siblings), we can refer to front_of_house from eat_at_restaurant . Next is the hosting module marked with pub . We can access the parent
module of hosting , so we can access hosting . Finally, the add_to_waitlist function is marked with pub and we can access its parent module, so this function call works!
In the relative path, the logic is the same as the absolute path except for the first step: rather than starting from the crate root, the path starts from front_of_house . The front_of_house module is defined within the same module as eat_at_restaurant , so the relative path starting from the module in which eat_at_restaurant is defined works. Then, because hosting and add_to_waitlist are marked with pub , the rest of the path works, and this function call is valid!
Best Practices for Packages with a Binary and a Library
We mentioned that a package can contain both a src/main.rs binary crate root and a src/lib.rs library crate root, and both crates will have the package name by default. Typically, packages with this pattern of containing both a library and a binary crate have just enough code in the binary crate to start an executable that calls code within the library crate. This lets other projects benefit from most of the functionality that the package provides, because the library crate's code can be shared.
The module tree should be defined in src/lib.rs . Then, any public items can be used in the binary crate by starting paths with the name of the package. The binary crate becomes a user of the library crate just like a completely external crate would use the library crate: it can only use the public API. This helps you design a good API; not only are you the author, you're also a client!
Starting Relative Paths with super
We can construct relative paths that begin in the parent module, rather than the current module or the crate root, by using super at the start of the path. This is like starting a filesystem path with the .. syntax. Using super allows us to reference an item that we know is in the parent module, which can make rearranging the module tree easier when the module is closely related to the parent, but the parent might be moved elsewhere in the module tree someday.
Consider the code in Listing 7-8 that models the situation in which a chef fixes an incorrect order and personally brings it out to the customer. The function fix_incorrect_order defined in the back_of_house module calls the function deliver_order defined in the parent module by specifying the path to deliver_order starting with super :
fn deliver_order() {}
mod back_of_house {
fn fix_incorrect_order() {
cook_order();
super::deliver_order();
}
fn cook_order() {}
}
The fix_incorrect_order function is in the back_of_house module, so we can use super to go to the parent module of back_of_house , which in this case is crate , the root. From there, we look for deliver_order and find it. Success! We think the back_of_house module and the deliver_order function are likely to stay in the same relationship to each other and get moved together should we decide to reorganize the crate's module tree. Therefore, we used super so we'll have fewer places to update code in the future if this code gets moved to a different module.
Making Structs and Enums Public
We can also use pub to designate structs and enums as public, but there are a few extra details to the usage of pub with structs and enums. If we use pub before a struct definition, we make the struct public, but the struct's fields will still be private. We can make each field public or not on a case-by-case basis. In the example below, we've defined a public back_of_house::Breakfast struct with a public toast field but a private seasonal_fruit field. This models the case in a restaurant where the customer can pick the type of bread that comes with a meal, but the chef decides which fruit accompanies the meal based on what's in season and in stock. The available fruit changes quickly, so customers can't choose the fruit or even see which fruit they'll get.
mod back_of_house {
pub struct Breakfast {
pub toast: String,
seasonal_fruit: String,
}
impl Breakfast {
pub fn summer(toast: &str) -> Breakfast {
Breakfast {
toast: String::from(toast),
seasonal_fruit: String::from("peaches"),
}
}
}
}
pub fn eat_at_restaurant() {
// Order a breakfast in the summer with Rye toast
let mut meal = back_of_house::Breakfast::summer("Rye");
// Change our mind about what bread we'd like
meal.toast = String::from("Wheat");
println!("I'd like {} toast please", meal.toast);
// The next line won't compile if we uncomment it; we're not allowed
// to see or modify the seasonal fruit that comes with the meal
// meal.seasonal_fruit = String::from("blueberries");
}
In contrast, if we make an enum public, all of its variants are then public. We only need the pub before the enum keyword:
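A minimal sketch of the Appetizer example referenced below (the same names the notes mention, following the usual restaurant example):

mod back_of_house {
    pub enum Appetizer {
        Soup,
        Salad,
    }
}

pub fn eat_at_restaurant() {
    let order1 = back_of_house::Appetizer::Soup;
    let order2 = back_of_house::Appetizer::Salad;
}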
Because we made the Appetizer enum public, we can use the Soup and Salad variants in eat_at_restaurant . Enums aren't very useful unless their variants are public; it would be annoying to have to annotate all enum variants with pub in every case, so the default for enum variants is to be public. Structs are often useful without their fields being public, so struct fields follow the general rule of everything being private by default unless annotated with pub .
Bringing Paths into Scope with the use Keyword
Having to write out the paths to call functions can feel inconvenient and repetitive. Whether we chose the absolute or relative path to the add_to_waitlist function, every time we wanted to call add_to_waitlist we had to specify front_of_house and hosting too. Fortunately, there's a way to simplify this process: we can create a shortcut to a path with the use keyword once and then use the shorter name everywhere else in the scope. Below, we bring the crate::front_of_house::hosting module into the scope of the eat_at_restaurant function so we only have to specify hosting::add_to_waitlist to call the add_to_waitlist function in eat_at_restaurant .
mod front_of_house {
pub mod hosting {
pub fn add_to_waitlist() {}
}
}
use crate::front_of_house::hosting;
pub fn eat_at_restaurant() {
hosting::add_to_waitlist();
}
Adding use and a path in a scope is similar to creating a symbolic link in the filesystem. By adding use crate::front_of_house::hosting in the crate root, hosting is now a valid name in that scope, just as though the hosting module had been defined in the crate root. Paths brought into scope with use also check privacy, like any other paths.
Providing New Names with the as Keyword
There's another solution to the problem of bringing two types of the same name into the same scope with use : after the path, we can specify as and a new local name, or alias, for the type.
use std::fmt::Result;
use std::io::Result as IoResult;
fn function1() -> Result {
// --snip--
}
fn function2() -> IoResult<()> {
// --snip--
}
Re-exporting Names with pub use
When we bring a name into scope with the use keyword, the name available in the new scope is private. To enable the code that calls our code to refer to that name as if it had been defined in that code's scope, we can combine pub and use . This technique is called re-exporting because we're bringing an item into scope but also making that item available for others to bring into their scope.
mod front_of_house {
pub mod hosting {
pub fn add_to_waitlist() {}
}
}
pub use crate::front_of_house::hosting;
pub fn eat_at_restaurant() {
hosting::add_to_waitlist();
}
Using Nested Paths to Clean Up Large use Lists
If we're using multiple items defined in the same crate or same module, listing each item on its own line can take up a lot of vertical space in our files.
// --snip--
use std::cmp::Ordering;
use std::io;
// --snip--
Instead, we can use nested paths to bring the same items into scope in one line. We do this by specifying the common part of the path, followed by two colons, and then curly brackets around a list of the parts of the paths that differ,
// --snip--
use std::{cmp::Ordering, io};
// --snip--
We can use a nested path at any level in a path, which is useful when combining two use statements that share a subpath.
use std::io;
use std::io::Write;
The common part of these two paths is std::io , and that's the complete first path. To merge these two paths into one use statement, we can use self in the nested path,
use std::io::{self, Write};
This line brings std::io and std::io::Write into scope.
The Glob Operator
If we want to bring all public items defined in a path into scope, we can specify that path followed by the * glob operator:
use std::collections::*;
This use statement brings all public items defined in std::collections into the current scope. Be careful when using the glob operator! Glob can make it harder to tell what names are in scope and where a name used in your program was defined.
Common Collections
Unlike the built-in array and tuple types, the data these collections point to is stored on the heap, which means the amount of data does not need to be known at compile time and can grow or shrink as the program runs.
• A vector allows you to store a variable number of values next to each other.
• A string is a collection of characters. We've mentioned the String type previously, but in this chapter we'll talk about it in depth.
• A hash map allows you to associate a value with a particular key. It's a particular implementation of the more general data structure called a map.
Creating a New Vector
let v: Vec<i32> = Vec::new();
Note that we added a type annotation here. Because we aren't inserting any values into this vector, Rust doesn't know what kind of elements we intend to store. This is an important point: vectors are implemented using generics.
let v = vec![1, 2, 3];
Updating a Vector
To create a vector and then add elements to it, we can use the push method,
let mut v = Vec::new();
v.push(5);
v.push(6);
v.push(7);
v.push(8);
Reading Elements of Vectors
There are two ways to reference a value stored in a vector: via indexing or using the get method. In the following examples, we've annotated the types of the values that are returned from these functions for extra clarity.
let v = vec![1, 2, 3, 4, 5];
let third: &i32 = &v[2];
println!("The third element is {}", third);
let third: Option<&i32> = v.get(2);
match third {
Some(third) => println!("The third element is {}", third),
None => println!("There is no third element."),
}
Note a few details here. We use the index value of 2 to get the third element because vectors are indexed by number, starting at zero. Using & and [] gives us a reference to the element at the index value. When we use the get method with the index passed as an argument, we get an Option<&T> that we can use with match .
The reason Rust provides these two ways to reference an element is so you can choose how the program behaves when you try to use an index value outside the range of existing elements. As an example, let's see what happens when we have a vector of five elements and then we try to access an element at index 100 with each technique,
let v = vec![1, 2, 3, 4, 5];
let does_not_exist = &v[100];
let does_not_exist = v.get(100);
When we run this code, the first [] method will cause the program to panic because it references a nonexistent element. This method is best used when you want your program to crash if there's an attempt to access an element past the end of the vector.
When the get method is passed an index that is outside the vector, it returns None without panicking. You would use this method if accessing an element beyond the range of the vector may happen occasionally under normal circumstances.
Using an Enum to Store Multiple Types
Vectors can only store values that are the same type.
For example, say we want to get values from a row in a spreadsheet in which some of the columns in the row contain integers, some floating-point numbers, and some strings. We can define an enum whose variants will hold the different value types, and all the enum variants will be considered the same type: that of the enum. Then we can create a vector that holds that enum and so, ultimately, holds different types.
enum SpreadsheetCell {
Int(i32),
Float(f64),
Text(String),
}
let row = vec![
SpreadsheetCell::Int(3),
SpreadsheetCell::Text(String::from("blue")),
SpreadsheetCell::Float(10.12),
];
Rust needs to know what types will be in the vector at compile time so it knows exactly how much memory on the heap will be needed to store each element. We must also be explicit about what types are allowed in this vector. If Rust allowed a vector to hold any type, there would be a chance that one or more of the types would cause errors with the operations performed on the elements of the vector. Using an enum plus a match expression means that Rust will ensure at compile time that every possible case is handled.
If you don't know the exhaustive set of types a program will get at runtime to store in a vector, the enum technique won't work. Instead, you can use a trait object, as sketched below.
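A hedged sketch of the trait-object approach; the Draw trait and the Button and Slider types here are illustrative placeholders, not taken from these notes:

trait Draw {
    fn draw(&self);
}

struct Button;
impl Draw for Button {
    fn draw(&self) {
        println!("drawing a button");
    }
}

struct Slider;
impl Draw for Slider {
    fn draw(&self) {
        println!("drawing a slider");
    }
}

fn main() {
    // Each element is boxed behind the same trait object type,
    // so the vector can hold different concrete types.
    let components: Vec<Box<dyn Draw>> = vec![Box::new(Button), Box::new(Slider)];
    for component in &components {
        component.draw();
    }
}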
What Is a String?
The String type, which is provided by Rust's standard library rather than coded into the core language, is a growable, mutable, owned, UTF-8 encoded string type. When Rustaceans refer to “strings” in Rust, they might be referring to either the String or the string slice &str types, not just one of those types. Although this section is largely about String , both types are used heavily in Rust's standard library, and both String and string slices are UTF-8 encoded.
Creating a New String
Many of the same operations available with Vec<T> are available with String as well, because String is actually implemented as a wrapper around a vector of bytes with some extra guarantees, restrictions, and capabilities.
let mut s = String::new();
This line creates a new empty string called s , which we can then load data into. Often, we'll have some initial data that we want to start the string with. For that, we use the to_string method, which is available on any type that implements the Display trait,
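For instance, a minimal sketch using to_string and the equivalent String::from (standard library behavior):

fn main() {
    let data = "initial contents";

    // to_string works on any type that implements Display, including string literals.
    let s = data.to_string();

    // String::from a literal is equivalent here.
    let s2 = String::from("initial contents");

    println!("{} {}", s, s2);
}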
Bytes and Scalar Values and Grapheme Clusters! Oh My!
Another point about UTF-8 is that there are actually three relevant ways to look at strings from Rust's perspective: as bytes, scalar values, and grapheme clusters (the closest thing to what we would call letters).
If we look at the Hindi word “नमस्ते” written in the Devanagari script, it is stored as a vector of u8 values that looks like this:
[224, 164, 168, 224, 164, 174, 224, 164, 184, 224, 165, 141, 224, 164, 164,
224, 165, 135]
That's 18 bytes and is how computers ultimately store this data. If we look at them as Unicode scalar values, which are what Rust's char type is, those bytes look like this:
['न', 'म', 'स', '◌्', 'त', '◌े']
There are six char values here, but the fourth and sixth are not letters: they're diacritics that don't make sense on their own. Finally, if we look at them as grapheme clusters, we'd get what a person would call the four letters that make up the Hindi word:
["न", "म", "स्", "ते"]
Rust provides different ways of interpreting the raw string data that computers store so that each program can choose the interpretation it needs, no matter what human language the data is in.
A final reason Rust doesn't allow us to index into a String to get a character is that indexing operations are expected to always take constant time (O(1)). But it isn't possible to guarantee that performance with a String , because Rust would have to walk through the contents from the beginning to the index to determine how many valid characters there were.
Slicing Strings
Rather than indexing using [] with a single number, you can use [] with a range to create a string slice containing particular bytes:
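A short sketch (each of these Cyrillic characters takes 2 bytes in UTF-8, so the slice &hello[0..4] contains the first two characters):

fn main() {
    let hello = "Здравствуйте";

    // s will be "Зд": bytes 0..4 cover the first two 2-byte characters.
    let s = &hello[0..4];
    println!("{}", s);
}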
Methods for Iterating Over Strings
The best way to operate on pieces of strings is to be explicit about whether you want characters or bytes. For individual Unicode scalar values, use the chars method. Calling chars on “Зд” separates out and returns two values of type char , and you can iterate over the result to access each element:
for c in "Зд".chars() {
println!("{}", c);
}
This code will print the following:
З
д
Alternatively, the bytes method returns each raw byte, which might be appropriate for your domain:
for b in "Зд".bytes() {
println!("{}", b);
}
This code will print the four bytes that make up this string:
208
151
208
180
But be sure to remember that valid Unicode scalar values may be made up of more than one byte. Getting grapheme clusters from strings, as with the Devanagari script, is complex, so this functionality is not provided by the standard library.
Storing Keys with Associated Values in Hash Maps
The last of our common collections is the hash map. The type HashMap<K, V> stores a mapping of keys of type K to values of type V using a hashing function, which determines how it places these keys and values into memory. Many programming languages support this kind of data structure, but they often use a different name, such as hash, map, object, hash table, dictionary, or associative array, just to name a few.
Updating a Hash Map
Although the number of key and value pairs is growable, each unique key can only have one value associated with it at a time (but not vice versa: for example, both the Blue team and the Yellow team could have value 10 stored in the scores hash map).
When you want to change the data in a hash map, you have to decide how to handle the case when a key already has a value assigned. You could replace the old value with the new value, completely disregarding the old value. You could keep the old value and ignore the new value, only adding the new value if the key doesn't already have a value. Or you could combine the old value and the new value. Let's look at how to do each of these!
Adding a Key and Value Only If a Key Isn't Present
Hash maps have a special API for this called entry that takes the key you want to check as a parameter. The return value of the entry method is an enum called Entry that represents a value that might or might not exist. Let's say we want to check whether the key for the Yellow team has a value associated with it. If it doesn't, we want to insert the value 50 , and the same for the Blue team.
use std::collections::HashMap;
let mut scores = HashMap::new();
scores.insert(String::from("Blue"), 10);
scores.entry(String::from("Yellow")).or_insert(50);
scores.entry(String::from("Blue")).or_insert(50);
println!("{:?}", scores);
Rust groups errors into two major categories: recoverable and unrecoverable errors. For a recoverable error, such as a file not found error, we most likely just want to report the problem to the user and retry the operation. Unrecoverable errors are always symptoms of bugs, like trying to access a location beyond the end of an array, and so we want to immediately stop the program.
Most languages don't distinguish between these two kinds of errors and handle both in the same way, using mechanisms such as exceptions. Rust doesn't have exceptions. Instead, it has the type Result<T, E> for recoverable errors and the panic! macro that stops execution when the program encounters an unrecoverable error. This chapter covers calling panic! first and then talks about returning Result<T, E> values.
Unwinding the Stack or Aborting in Response to a Panic
By default, when a panic occurs, the program starts unwinding, which means Rust walks back up the stack and cleans up the data from each function it encounters. However, this walking back and cleanup is a lot of work. Rust, therefore, allows you to choose the alternative of immediately aborting, which ends the program without cleaning up.
Memory that the program was using will then need to be cleaned up by the operating system. If in your project you need to make the resulting binary as small as possible, you can switch from unwinding to aborting upon a panic by adding panic = 'abort' to the appropriate [profile] sections in your Cargo.toml file. For example, if you want to abort on panic in release mode, add this:
[profile.release]
panic = 'abort'
Using a panic! Backtrace
A backtrace is a list of all the functions that have been called to get to this point. Backtraces in Rust work as they do in other languages: the key to reading the backtrace is to start from the top and read until you see files you wrote. That's the spot where the problem originated. The lines above that spot are code that your code has called; the lines below are code that called your code. These before-and-after lines might include core Rust code, standard library code, or crates that you're using. Let's try getting a backtrace by setting the RUST_BACKTRACE environment variable to any value except 0.
$ RUST_BACKTRACE=1 cargo run
thread 'main' panicked at 'index out of bounds: the len is 3 but the index is 99',
src/main.rs:4:5 stack backtrace:
0: rust_begin_unwind at
/rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/panicking.rs:483
1: core::panicking::panic_fmt at
/rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/core/src/panicking.rs:85
2: core::panicking::panic_bounds_check at
/rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/core/src
Recoverable Errors with Result
enum Result<T, E> {
Ok(T),
Err(E),
}
The T and E are generic type parameters: T represents the type of the value that will be returned in a success case within the Ok variant, and E represents the type of the error that will be returned in a failure case within the Err variant.
Matching on Different Errors
use std::fs::File;
use std::io::ErrorKind;
fn main() {
let greeting_file_result = File::open("hello.txt");
let greeting_file = match greeting_file_result {
Ok(file) => file,
Err(error) => match error.kind() {
ErrorKind::NotFound => match File::create("hello.txt") {
Ok(fc) => fc,
Err(e) => panic!("Problem creating the file: {:?}", e),
},
other_error => {
panic!("Problem opening the file: {:?}", other_error);
}
},
};
}
Propagating Errors
When a function's implementation calls something that might fail, instead of handling the error within the function itself, you can return the error to the calling code so that it can decide what to do. This is known as propagating the error and gives more control to the calling code, where there might be more information or logic that dictates how the error should be handled than what you have available in the context of your code.
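For reference, a sketch of the longer form that the ? shortcut below condenses, propagating the error to the caller with match (this is the match expression the next section compares against):

use std::fs::File;
use std::io::{self, Read};

fn read_username_from_file() -> Result<String, io::Error> {
    let username_file_result = File::open("hello.txt");

    let mut username_file = match username_file_result {
        Ok(file) => file,
        // Return early with the error so the caller decides how to handle it.
        Err(e) => return Err(e),
    };

    let mut username = String::new();

    match username_file.read_to_string(&mut username) {
        Ok(_) => Ok(username),
        Err(e) => Err(e),
    }
}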
A Shortcut for Propagating Errors: the ? Operator
use std::fs::File;
use std::io;
use std::io::Read;
fn read_username_from_file() -> Result<String, io::Error> {
let mut username_file = File::open("hello.txt")?;
let mut username = String::new();
username_file.read_to_string(&mut username)?;
Ok(username)
}
The ? placed after a Result value is defined to work in almost the same way as the match expressions we defined to handle the Result values. If the value of the Result is an Ok , the value inside the Ok will get returned from this expression, and the program will continue. If the value is an Err , the Err will be returned from the whole function as if we had used the return keyword, so the error value gets propagated to the calling code.
There is a difference between what the match expression from last example does and what the ? operator does: error values that have the ? operator called on them go through the from function, defined in the From trait in the standard library, which is used to convert values from one type into another. When the ? operator calls the from function, the error type received is converted into the error type defined in the return type of the current function. This is useful when a function returns one error type to represent all the ways a function might fail, even if parts might fail for many different reasons.
Where The ? Operator Can Be Used
The ? operator can only be used in functions whose return type is compatible with the value the ? is used on. This is because the ? operator is defined to perform an early return of a value out of the function, in the same manner as the match expression.
This error points out that we're only allowed to use the ? operator in a function that returns Result , Option , or another type that implements FromResidual .
Note that you can use the ? operator on a Result in a function that returns Result , and you can use the ? operator on an Option in a function that returns Option , but you can't mix and match. The ? operator won't automatically convert a Result to an Option or vice versa; in those cases, you can use methods like the ok method on Result or the ok_or method on Option to do the conversion explicitly.
When a main function returns a Result<(), E> , the executable will exit with a value of 0 if main returns Ok(()) and will exit with a nonzero value if main returns an Err value. Executables written in C return integers when they exit: programs that exit successfully return the integer 0 , and programs that error return some integer other than 0 . Rust also returns integers from executables to be compatible with this convention.
The main function may return any types that implement the std::process::Termination trait, which contains a function report that returns an ExitCode . Consult the standard library documentation for more information on implementing the Termination trait for your own types.
Generic Types, Traits, and Lifetimes
Every programming language has tools for effectively handling the duplication of concepts. In Rust, one such tool is generics: abstract stand-ins for concrete types or other properties. We can express the behavior of generics or how they relate to other generics without knowing what will be in their place when compiling and running the code.
struct Point<T> {
x: T,
y: T,
}
impl<T> Point<T> {
fn x(&self) -> &T {
&self.x
}
}
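A short usage sketch, assuming the Point<T> definition and x method above:

fn main() {
    let p = Point { x: 5, y: 10 };

    // x() returns a reference to the x field for whatever concrete type T ends up being.
    println!("p.x = {}", p.x());
}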
Traits: Defining Shared Behavior
A trait defines functionality a particular type has and can share with other types. We can use traits to define shared behavior in an abstract way. We can use trait bounds to specify that a generic type can be any type that has certain behavior.
Note: Traits are similar to a feature often called interfaces in other languages, although with some differences.
pub trait Summary {
fn summarize(&self) -> String;
}
Here, we declare a trait using the trait keyword and then the trait's name, which is Summary in this case. We've also declared the trait as pub so that crates depending on this crate can make use of this trait too, as we'll see in a few examples. Inside the curly brackets, we declare the method signatures that describe the behaviors of the types that implement this trait, which in this case is fn summarize(&self) -> String .
A trait can have multiple methods in its body: the method signatures are listed one per line and each line ends in a semicolon.
Implementing a Trait on a Type
pub struct NewsArticle {
pub headline: String,
pub location: String,
pub author: String,
pub content: String,
}
impl Summary for NewsArticle {
fn summarize(&self) -> String {
format!("{}, by {} ({})", self.headline, self.author, self.location)
}
}
pub struct Tweet {
pub username: String,
pub content: String,
pub reply: bool,
pub retweet: bool,
}
impl Summary for Tweet {
fn summarize(&self) -> String {
format!("{}: {}", self.username, self.content)
}
}
Now that the library has implemented the Summary trait on NewsArticle and Tweet , users of the crate can call the trait methods on instances of NewsArticle and Tweet in the same way we call regular methods. The only difference is that the user must bring the trait into scope as well as the types.
use aggregator::{Summary, Tweet};
fn main() {
let tweet = Tweet {
username: String::from("horse_ebooks"),
content: String::from(
"of course, as you probably already know, people",
),
reply: false,
retweet: false,
};
println!("1 new tweet: {}", tweet.summarize());
}
Other crates that depend on the aggregator crate can also bring the Summary trait into scope to implement Summary on their own types. One restriction to note is that we can implement a trait on a type only if at least one of the trait or the type is local to our crate. For example, we can implement standard library traits like Display on a custom type like Tweet as part of our aggregator crate functionality, because the type Tweet is local to our aggregator crate. We can also implement Summary on Vec<T> in our aggregator crate, because the trait Summary is local to our aggregator crate.
But we can't implement external traits on external types. For example, we can't implement the Display trait on Vec<T> within our aggregator crate, because Display and Vec<T> are both defined in the standard library and aren't local to our aggregator crate. This restriction is part of a property called coherence, and more specifically the orphan rule, so named because the parent type is not present. This rule ensures that other people's code can't break your code and vice versa. Without the rule, two crates could implement the same trait for the same type, and Rust wouldn't know which implementation to use.
Default Implementations
Sometimes it's useful to have default behavior for some or all of the methods in a trait instead of requiring implementations for all methods on every type. Then, as we implement the trait on a particular type, we can keep or override each method's default behavior.
pub trait Summary {
fn summarize(&self) -> String {
String::from("(Read more...)")
}
}
To use a default implementation to summarize instances of NewsArticle , we specify an empty impl block with
impl Summary for NewsArticle {}
Default implementations can call other methods in the same trait, even if those other methods don't have a default implementation. In this way, a trait can provide a lot of useful functionality and only require implementors to specify a small part of it. For example, we could define the Summary trait to have a summarize_author method whose implementation is required, and then define a summarize method that has a default implementation that calls the summarize_author method:
pub trait Summary {
fn summarize_author(&self) -> String;
fn summarize(&self) -> String {
format!("(Read more from {}...)", self.summarize_author())
}
}
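With that default in place, a type only has to supply summarize_author; a sketch reusing the Tweet struct from the earlier example:

impl Summary for Tweet {
    fn summarize_author(&self) -> String {
        format!("@{}", self.username)
    }
}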
Note that it isn't possible to call the default implementation from an overriding implementation of that same method.
Traits as Parameters
pub fn notify(item: &impl Summary) {
println!("Breaking news! {}", item.summarize());
}
Instead of a concrete type for the item parameter, we specify the impl keyword and the trait name. This parameter accepts any type that implements the specified trait. In the body of notify , we can call any methods on item that come from the Summary trait, such as summarize . We can call notify and pass in any instance of NewsArticle or Tweet . Code that calls the function with any other type, such as a String or an i32 , won't compile because those types don't implement Summary .
Trait Bound Syntax
The impl Trait syntax works for straightforward cases but is actually syntax sugar for a longer form known as a trait bound; it looks like this:
pub fn notify<T: Summary>(item: &T) {
println!("Breaking news! {}", item.summarize());
}
This longer form is equivalent to the example in the previous section but is more verbose. We place trait bounds with the declaration of the generic type parameter after a colon and inside angle brackets.
The impl Trait syntax is convenient and makes for more concise code in simple cases, while the fuller trait bound syntax can express more complexity in other cases. For example, we can have two parameters that implement Summary . Doing so with the impl Trait syntax looks like this:
pub fn notify(item1: &impl Summary, item2: &impl Summary) {
Using impl Trait is appropriate if we want this function to allow item1 and item2 to have different types (as long as both types implement Summary ). If we want to force both parameters to have the same type, however, we must use a trait bound, like this:
pub fn notify<T: Summary>(item1: &T, item2: &T) {
The generic type T specified as the type of the item1 and item2 parameters constrains the function such that the concrete type of the value passed as an argument for item1 and item2 must be the same.
Specifying Multiple Trait Bounds with the + Syntax
We can also specify more than one trait bound. Say we wanted notify to use display formatting as well as summarize on item : we specify in the notify definition that item must implement both Display and Summary . We can do so using the + syntax:
pub fn notify(item: &(impl Summary + Display)) {
The + syntax is also valid with trait bounds on generic types:
pub fn notify<T: Summary + Display>(item: &T) {
With the two trait bounds specified, the body of notify can call summarize and use {} to format item .
Clearer Trait Bounds with where Clauses
Using too many trait bounds has its downsides. Each generic has its own trait bounds, so functions with multiple generic type parameters can contain lots of trait bound information between the function's name and its parameter list, making the function signature hard to read. For this reason, Rust has alternate syntax for specifying trait bounds inside a where clause after the function signature. So instead of writing this:
fn some_function<T: Display + Clone, U: Clone + Debug>(t: &T, u: &U) -> i32 {
we can use a where clause, like this:
fn some_function<T, U>(t: &T, u: &U) -> i32
where T: Display + Clone,
U: Clone + Debug
{
Returning Types that Implement Traits
We can also use the impl Trait syntax in the return position to return a value of some type that implements a trait, as shown here:
fn returns_summarizable() -> impl Summary {
Tweet {
username: String::from("horse_ebooks"),
content: String::from(
"of course, as you probably already know, people",
),
reply: false,
retweet: false,
}
}
The ability to specify a return type only by the trait it implements is especially useful in the context of closures and iterators. Closures and iterators create types that only the compiler knows or types that are very long to specify. The impl Trait syntax lets you concisely specify that a function returns some type that implements the Iterator trait without needing to write out a very long type.
However, you can only use impl Trait if you're returning a single type.
Using Trait Bounds to Conditionally Implement Methods
use std::fmt::Display;
struct Pair<T> {
x: T,
y: T,
}
impl<T> Pair<T> {
fn new(x: T, y: T) -> Self {
Self { x, y }
}
}
impl<T: Display + PartialOrd> Pair<T> {
fn cmp_display(&self) {
if self.x >= self.y {
println!("The largest member is x = {}", self.x);
} else {
println!("The largest member is y = {}", self.y);
}
}
}
We can also conditionally implement a trait for any type that implements another trait. Implementations of a trait on any type that satisfies the trait bounds are called blanket implementations and are extensively used in the Rust standard library.
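For example, the standard library implements the ToString trait on any type that implements the Display trait, with a blanket implementation that looks roughly like this:

impl<T: Display> ToString for T {
    // --snip--
}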
Lifetime annotations don't change how long any of the references live. Rather, they describe the relationships of the lifetimes of multiple references to each other without affecting the lifetimes. Just as functions can accept any type when the signature specifies a generic type parameter, functions can accept references with any lifetime by specifying a generic lifetime parameter.
Lifetime annotations have a slightly unusual syntax: the names of lifetime parameters must start with an apostrophe ( ' ) and are usually all lowercase and very short, like generic types. Most people use the name 'a for the first lifetime annotation. We place lifetime parameter annotations after the & of a reference, using a space to separate the annotation from the reference's type.
&i32 // a reference
&'a i32 // a reference with an explicit lifetime
&'a mut i32 // a mutable reference with an explicit lifetime
Lifetime Annotations in Function Signatures
To use lifetime annotations in function signatures, we need to declare the generic lifetime parameters inside angle brackets between the function name and the parameter list, just as we did with generic type parameters.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
if x.len() > y.len() {
x
} else {
y
}
}
The function signature now tells Rust that for some lifetime 'a , the function takes two parameters, both of which are string slices that live at least as long as lifetime 'a . The function signature also tells Rust that the string slice returned from the function will live at least as long as lifetime 'a . In practice, it means that the lifetime of the reference returned by the longest function is the same as the smaller of the lifetimes of the values referred to by the function arguments.
Remember, when we specify the lifetime parameters in this function signature, we're not changing the lifetimes of any values passed in or returned. Rather, we're specifying that the borrow checker should reject any values that don't adhere to these constraints. Note that the longest function doesn't need to know exactly how long x and y will live, only that some scope can be substituted for 'a that will satisfy this signature.
When annotating lifetimes in functions, the annotations go in the function signature, not in the function body. The lifetime annotations become part of the contract of the function, much like the types in the signature. Having function signatures contain the lifetime contract means the analysis the Rust compiler does can be simpler. If there's a problem with the way a function is annotated or the way it is called, the compiler errors can point to the part of our code and the constraints more precisely. If, instead, the Rust compiler made more inferences about what we intended the relationships of the lifetimes to be, the compiler might only be able to point to a use of our code many steps away from the cause of the problem.
When we pass concrete references to longest , the concrete lifetime that is substituted for 'a is the part of the scope of x that overlaps with the scope of y . In other words, the generic lifetime 'a will get the concrete lifetime that is equal to the smaller of the lifetimes of x and y . Because we've annotated the returned reference with the same lifetime parameter 'a , the returned reference will also be valid for the length of the smaller of the lifetimes of x and y .
Ultimately, lifetime syntax is about connecting the lifetimes of various parameters and return values of functions. Once they're connected, Rust has enough information to allow memory-safe operations and disallow operations that would create dangling pointers or otherwise violate memory safety.
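A sketch of how the smaller lifetime constrains where the result can be used, assuming the longest function defined above:

fn main() {
    let string1 = String::from("long string is long");

    {
        let string2 = String::from("xyz");
        // result borrows from both arguments, so its lifetime is the smaller
        // of the two; it must be used before string2 goes out of scope.
        let result = longest(string1.as_str(), string2.as_str());
        println!("The longest string is {}", result);
    }
}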
Lifetime Annotations in Struct Definitions
So far, the structs we've defined all hold owned types. We can define structs to hold references, but in that case we would need to add a lifetime annotation on every reference in the struct's definition.
struct ImportantExcerpt<'a> {
part: &'a str,
}
This struct has the single field part that holds a string slice, which is a reference. As with generic data types, we declare the name of the generic lifetime parameter inside angle brackets after the name of the struct so we can use the lifetime parameter in the body of the struct definition. This annotation means an instance of ImportantExcerpt can't outlive the reference it holds in its part field.
After writing a lot of Rust code, the Rust team found that Rust programmers were entering the same lifetime annotations over and over in particular situations. These situations were predictable and followed a few deterministic patterns. The developers programmed these patterns into the compiler's code so the borrow checker could infer the lifetimes in these situations and wouldn't need explicit annotations.
This piece of Rust history is relevant because it's possible that more deterministic patterns will emerge and be added to the compiler. In the future, even fewer lifetime annotations might be required.
The patterns programmed into Rust's analysis of references are called the lifetime elision rules. These aren't rules for programmers to follow; they're a set of particular cases that the compiler will consider, and if your code fits these cases, you don't need to write the lifetimes explicitly.
Lifetimes on function or method parameters are called input lifetimes, and lifetimes on return values are called output lifetimes.
The compiler uses three rules to figure out the lifetimes of the references when there aren't explicit annotations. The first rule applies to input lifetimes, and the second and third rules apply to output lifetimes. If the compiler gets to the end of the three rules and there are still references for which it can't figure out lifetimes, the compiler will stop with an error.
The first rule is that the compiler assigns a lifetime parameter to each parameter that's a reference. In other words, a function with one parameter gets one lifetime parameter: fn foo<'a>(x: &'a i32) ; a function with two parameters gets two separate lifetime parameters: fn foo<'a, 'b>(x: &'a i32, y: &'b i32) ; and so on.
The second rule is that, if there is exactly one input lifetime parameter, that lifetime is assigned to all output lifetime parameters: fn foo<'a>(x: &'a i32) -> &'a i32 .
The third rule is that, if there are multiple input lifetime parameters, but one of them is &self or &mut self because this is a method, the lifetime of self is assigned to all output lifetime parameters. This third rule makes methods much nicer to read and write because fewer symbols are necessary.
Because the third rule really only applies in method signatures, we'll look at lifetimes in that context next to see why the third rule means we don't have to annotate lifetimes in method signatures very often.
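A small worked sketch of the first two rules, using a first_word-style function as an illustration (the body here is just a placeholder implementation):

// As written, with no explicit lifetimes:
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// Rule 1 gives each reference parameter its own lifetime:
//     fn first_word<'a>(s: &'a str) -> &str
// Rule 2 then assigns the single input lifetime to the output:
//     fn first_word<'a>(s: &'a str) -> &'a str
// All references now have lifetimes, so no explicit annotation is needed.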
Lifetime Annotations in Method Definitions
Lifetime names for struct fields always need to be declared after the impl keyword and then used after the struct's name, because those lifetimes are part of the struct's type.
In method signatures inside the impl block, references might be tied to the lifetime of references in the struct's fields, or they might be independent. In addition, the lifetime elision rules often make it so that lifetime annotations aren't necessary in method signatures.
impl<'a> ImportantExcerpt<'a> {
fn level(&self) -> i32 {
3
}
}
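A sketch of a method where the third rule kicks in: there are two input lifetimes, but because one parameter is &self, the output lifetime is taken from self and nothing needs to be written out.

impl<'a> ImportantExcerpt<'a> {
    fn announce_and_return_part(&self, announcement: &str) -> &str {
        println!("Attention please: {}", announcement);
        self.part
    }
}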
The Static Lifetime
One special lifetime we need to discuss is 'static , which denotes that the affected reference can live for the entire duration of the program.
let s: &'static str = "I have a static lifetime.";
The text of this string is stored directly in the program's binary, which is always available. Therefore, the lifetime of all string literals is 'static .
You might see suggestions to use the 'static lifetime in error messages. But before specifying 'static as the lifetime for a reference, think about whether the reference you have actually lives the entire lifetime of your program or not, and whether you want it to. Most of the time, an error message suggesting the 'static lifetime results from attempting to create a dangling reference or a mismatch of the available lifetimes. In such cases, the solution is fixing those problems, not specifying the 'static lifetime.
Generic Type Parameters, Trait Bounds, and Lifetimes Together
use std::fmt::Display;
fn longest_with_an_announcement<'a, T>(
x: &'a str,
y: &'a str,
ann: T,
) -> &'a str
where
T: Display,
{
println!("Announcement! {}", ann);
if x.len() > y.len() {
x
} else {
y
}
}
Checking for Panics with should_panic
We do this by adding the attribute should_panic to our test function. The test passes if the code inside the function panics; the test fails if the code inside the function doesn't panic.
pub struct Guess {
value: i32,
}
impl Guess {
pub fn new(value: i32) -> Guess {
if value < 1 || value > 100 {
panic!("Guess value must be between 1 and 100, got {}.", value);
}
Guess { value }
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
#[should_panic]
fn greater_than_100() {
Guess::new(200);
}
}
Tests that use should_panic can be imprecise. A should_panic test would pass even if the test panics for a different reason from the one we were expecting. To make should_panic tests more precise, we can add an optional expected parameter to the should_panic attribute. The test harness will make sure that the failure message contains the provided text. For example, consider the modified code for Guess, where the new function panics with different messages depending on whether the value is too small or too large.
// --snip--
#[cfg(test)]
mod tests {
use super::*;
#[test]
#[should_panic(expected = "less than or equal to 100")]
fn greater_than_100() {
Guess::new(200);
}
}
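For reference, a sketch of what the modified new function behind that expected message might look like (the exact wording of the panic messages here is assumed):

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 {
            panic!(
                "Guess value must be greater than or equal to 1, got {}.",
                value
            );
        } else if value > 100 {
            panic!(
                "Guess value must be less than or equal to 100, got {}.",
                value
            );
        }
        Guess { value }
    }
}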
Using Result<T, E> in Tests
#[cfg(test)]
mod tests {
#[test]
fn it_works() -> Result<(), String> {
if 2 + 2 == 4 {
Ok(())
} else {
Err(String::from("two plus two does not equal four"))
}
}
}
Writing tests so they return a Result<T, E> enables you to use the question mark operator in the body of tests, which can be a convenient way to write tests that should fail if any operation within them returns an Err variant.
You can't use the #[should_panic] annotation on tests that use Result<T, E> . To assert that an operation returns an Err variant, don't use the question mark operator on the Result<T, E> value. Instead, use assert!(value.is_err()) .
Running Tests in Parallel or Consecutively
When you run multiple tests, by default they run in parallel using threads, meaning they finish running faster and you get feedback quicker. Because the tests are running at the same time, you must make sure your tests don't depend on each other or on any shared state, including a shared environment, such as the current working directory or environment variables.
If you don't want to run the tests in parallel or if you want more fine-grained control over the number of threads used, you can send the --test-threads flag and the number of threads you want to use to the test binary. Take a look at the following example:
$ cargo test -- --test-threads=1
We set the number of test threads to 1 , telling the program not to use any parallelism. Running the tests using one thread will take longer than running them in parallel, but the tests won't interfere with each other if they share state.
Showing Function Output
By default, if a test passes, Rust's test library captures anything printed to standard output. For example, if we call println! in a test and the test passes, we won't see the println! output in the terminal; we'll see only the line that indicates the test passed. If a test fails, we'll see whatever was printed to standard output with the rest of the failure message.
If we want to see printed values for passing tests as well, we can tell Rust to also show the output of successful tests with --show-output .
$ cargo test -- --show-output
Running Single Tests
We can pass the name of any test function to cargo test to run only that test:
$ cargo test one_hundred
Filtering to Run Multiple Tests
We can specify part of a test name, and any test whose name matches that value will be run. For example, because two of our tests' names contain add , we can run those two by running cargo test add :
$ cargo test add
Ignoring Some Tests Unless Specifically Requested
Sometimes a few specific tests can be very time-consuming to execute, so you might want to exclude them during most runs of cargo test . Rather than listing as arguments all tests you do want to run, you can instead annotate the time-consuming tests using the ignore attribute to exclude them, as shown here:
#[test]
fn it_works() {
assert_eq!(2 + 2, 4);
}
#[test]
#[ignore]
fn expensive_test() {
// code that takes an hour to run
}
The expensive_test function is listed as ignored . If we want to run only the ignored tests, we can use cargo test -- --ignored :
$ cargo test -- --ignored
If you want to run all tests whether they're ignored or not, you can run
$ cargo test -- --include-ignored
Test Organization
As mentioned at the start of the chapter, testing is a complex discipline, and different people use different terminology and organization. The Rust community thinks about tests in terms of two main categories: unit tests and integration tests. Unit tests are small and more focused, testing one module in isolation at a time, and can test private interfaces. Integration tests are entirely external to your library and use your code in the same way any other external code would, using only the public interface and potentially exercising multiple modules per test.
Unit Tests
The purpose of unit tests is to test each unit of code in isolation from the rest of the code to quickly pinpoint where code is and isn't working as expected. You'll put unit tests in the src directory in each file with the code that they're testing. The convention is to create a module named tests in each file to contain the test functions and to annotate the module with cfg(test)
The Tests Module and #[cfg(test)]
The #[cfg(test)] annotation on the tests module tells Rust to compile and run the test code only when you run cargo test , not when you run cargo build . This saves compile time when you only want to build the library and saves space in the resulting compiled artifact because the tests are not included. You'll see that because integration tests go in a different directory, they don't need the #[cfg(test)] annotation. However, because unit tests go in the same files as the code, you'll use #[cfg(test)] to specify that they shouldn't be included in the compiled result.
Integration Tests
In Rust, integration tests are entirely external to your library. They use your library in the same way any other code would, which means they can only call functions that are part of your library's public API.
The tests Directory
We create a tests directory at the top level of our project directory, next to src . Cargo knows to look for integration test files in this directory. We can then make as many test files as we want, and Cargo will compile each of the files as an individual crate.
Each file in the tests directory is a separate crate, so we need to bring our library into each test crate's scope. For that reason we add use adder at the top of the code, which we didn't need in the unit tests.
We don't need to annotate any code in tests/integration_test.rs with #[cfg(test)] . Cargo treats the tests directory specially and compiles files in this directory only when we run cargo test .
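A minimal sketch of such a test file, assuming a library crate named adder that exposes an add_two function (these names follow the usual example and are not defined in these notes):

// tests/integration_test.rs
use adder;

#[test]
fn it_adds_two() {
    // Exercise the library's public API exactly as an external crate would.
    assert_eq!(4, adder::add_two(2));
}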
Integration Tests for Binary Crates
If our project is a binary crate that only contains a src/main.rs file and doesn't have a src/lib.rs file, we can't create integration tests in the tests directory and bring functions defined in the src/main.rs file into scope with a use statement. Only library crates expose functions that other crates can use; binary crates are meant to be run on their own.
This is one of the reasons Rust projects that provide a binary have a straightforward src/main.rs file that calls logic that lives in the src/lib.rs file. Using that structure, integration tests can test the library crate with use to make the important functionality available. If the important functionality works, the small amount of code in the src/main.rs file will work as well, and that small amount of code doesn't need to be tested.
Rust's closures are anonymous functions you can save in a variable or pass as arguments to other functions. You can create the closure in one place and then call the closure elsewhere to evaluate it in a different context. Unlike functions, closures can capture values from the scope in which they're defined.
Closure Type Inference and Annotation
There are more differences between functions and closures. Closures don't usually require you to annotate the types of the parameters or the return value like fn functions do. Type annotations are required on functions because the types are part of an explicit interface exposed to your users. Defining this interface rigidly is important for ensuring that everyone agrees on what types of values a function uses and returns. Closures, on the other hand, aren't used in an exposed interface like this: they're stored in variables and used without naming them and exposing them to users of our library.
Closures are typically short and relevant only within a narrow context rather than in any arbitrary scenario. Within these limited contexts, the compiler can infer the types of the parameters and the return type, similar to how it's able to infer the types of most variables (there are rare cases where the compiler needs closure type annotations too).
use std::thread;
use std::time::Duration;
let expensive_closure = |num: u32| -> u32 {
println!("calculating slowly...");
thread::sleep(Duration::from_secs(2));
num
};
let example_closure = |x| x;
let s = example_closure(String::from("hello"));
let n = example_closure(5); // error: the closure's types were already inferred as String
The first time we call example_closure with the String value, the compiler infers the type of x and the return type of the closure to be String . Those types are then locked into the closure in example_closure , and we get a type error when we next try to use a different type with the same closure.
Capturing References or Moving Ownership
Closures can capture values from their environment in three ways, which directly map to the three ways a function can take a parameter: borrowing immutably, borrowing mutably, and taking ownership. The closure will decide which of these to use based on what the body of the function does with the captured values.
If you want to force the closure to take ownership of the values it uses in the environment even though the body of the closure doesn't strictly need ownership, you can use the move keyword before the parameter list. This technique is mostly useful when passing a closure to a new thread to move the data so that it's owned by the new thread.
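A sketch of move with a new thread (the standard pattern; the vector must be owned by the closure because the spawned thread may outlive the current scope):

use std::thread;

fn main() {
    let list = vec![1, 2, 3];
    println!("Before defining closure: {:?}", list);

    // move forces the closure to take ownership of list.
    thread::spawn(move || println!("From thread: {:?}", list))
        .join()
        .unwrap();
}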
Moving Captured Values Out of Closures and the Fn Traits
Once a closure has captured a reference or captured ownership of a value where the closure is defined (thus affecting what, if anything, is moved into the closure), the code in the body of the closure defines what happens to the references or values when the closure is evaluated later (thus affecting what, if anything, is moved out of the closure). A closure body can do any of the following: move a captured value out of the closure, mutate the captured value, neither move nor mutate the value, or capture nothing from the environment to begin with.
The way a closure captures and handles values from the environment affects which traits the closure implements, and traits are how functions and structs can specify what kinds of closures they can use. Closures will automatically implement one, two, or all three of these Fn traits, in an additive fashion:
FnOnce applies to closures that can be called at least once. All closures implement at least this trait, because all closures can be called. A closure that moves captured values out of its body will only implement FnOnce and none of the other Fn traits, because it can only be called once.
FnMut applies to closures that don't move captured values out of their body, but that might mutate the captured values. These closures can be called more than once.
Fn applies to closures that don't move captured values out of their body and that don't mutate captured values, as well as closures that capture nothing from their environment. These closures can be called more than once without mutating their environment, which is important in cases such as calling a closure multiple times concurrently.
The Iterator Trait and the next Method
The iter method produces an iterator over immutable references. If we want to create an iterator that takes ownership of the collection and returns owned values, we can call into_iter instead of iter . Similarly, if we want to iterate over mutable references, we can call iter_mut instead of iter .
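A small sketch contrasting the three (the variable names are arbitrary):
fn main() {
    let mut v = vec![1, 2, 3];

    // iter(): immutable references; the vector is still usable afterwards.
    for n in v.iter() {
        println!("saw {}", n);
    }

    // iter_mut(): mutable references, so we can change elements in place.
    for n in v.iter_mut() {
        *n += 10;
    }

    // into_iter(): takes ownership and yields owned values; `v` is moved here.
    for n in v.into_iter() {
        println!("owned {}", n);
    }
}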
Methods that Consume the Iterator
Methods that call next are called consuming adaptors , because calling them uses up the iterator. One example is the sum method, which takes ownership of the iterator and iterates through the items by repeatedly calling next , thus consuming the iterator. As it iterates through, it adds each item to a running total and returns the total when iteration is complete.
fn iterator_sum() {
let v1 = vec![1, 2, 3];
let v1_iter = v1.iter();
let total: i32 = v1_iter.sum();
assert_eq!(total, 6);
}
We aren't allowed to use v1_iter after the call to sum because sum takes ownership of the iterator we call it on.
Methods that Produce Other Iterators
Iterator adaptors are methods defined on the Iterator trait that don't consume the iterator. Instead, they produce different iterators by changing some aspect of the original iterator.
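For example, map takes a closure to call on each item and produces a new iterator; because adaptors are lazy, a consuming method such as collect is needed to get results:
fn main() {
    let v1 = vec![1, 2, 3];
    // map is lazy: nothing happens until collect consumes the new iterator.
    let v2: Vec<i32> = v1.iter().map(|x| x + 1).collect();
    assert_eq!(v2, vec![2, 3, 4]);
}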
Commonly Used Sections
The # Examples Markdown heading creates a section titled "Examples" in the generated HTML. Other sections that crate authors commonly use in their documentation include:
• Panics: The scenarios in which the function being documented could panic. Callers of the function who don't want their programs to panic should make sure they don't call the function in these situations.
• Errors: If the function returns a Result , describing the kinds of errors that might occur and what conditions might cause those errors to be returned can be helpful to callers so they can write code to handle the different kinds of errors in different ways.
• Safety: If the function is unsafe to call, there should be a section explaining why the function is unsafe and covering the invariants that the function expects callers to uphold.
Commenting Contained Items
The style of doc comment //! adds documentation to the item that contains the comments rather than to the items following the comments. We typically use these doc comments inside the crate root file (src/lib.rs by convention) or inside a module to document the crate or the module as a whole.
For example, to add documentation that describes the purpose of the my_crate crate that contains the add_one function, we add documentation comments that start with //! to the beginning of the src/lib.rs file:
//! # My Crate
//!
//! `my_crate` is a collection of utilities to make performing certain
//! calculations more convenient.
/// Adds one to the number given.
// --snip--
Notice there isn't any code after the last line that begins with //! . Because we started the comments with //! instead of /// , we're documenting the item that contains this comment rather than an item that follows this comment. In this case, that item is the src/lib.rs file, which is the crate root. These comments describe the entire crate.
When we run cargo doc --open , these comments will display on the front page of the documentation for my_crate above the list of public items in the crate.
Smart pointers are usually implemented using structs. Unlike an ordinary struct, smart pointers implement the Deref and Drop traits. The Deref trait allows an instance of the smart pointer struct to behave like a reference so you can write your code to work with either references or smart pointers. The Drop trait allows you to customize the code that's run when an instance of the smart pointer goes out of scope.
• Box<T> for allocating values on the heap
• Rc<T> , a reference counting type that enables multiple ownership
• Ref<T> and RefMut<T> , accessed through RefCell<T> , a type that enforces the borrowing rules at runtime instead of compile time
Using Box<T> to Point to Data on the Heap
The most straightforward smart pointer is a box, whose type is written Box<T> . Boxes allow you to store data on the heap rather than the stack. What remains on the stack is the pointer to the heap data.
Boxes don't have performance overhead, other than storing their data on the heap instead of on the stack. But they don't have many extra capabilities either. You'll use them most often in these situations:
• When you have a type whose size can't be known at compile time and you want to use a value of that type in a context that requires an exact size
• When you have a large amount of data and you want to transfer ownership but ensure the data won't be copied when you do so
• When you want to own a value and you care only that it's a type that implements a particular trait rather than being of a specific type
The Box<T> type is a smart pointer because it implements the Deref trait, which allows Box<T> values to be treated like references. When a Box<T> value goes out of scope, the heap data that the box is pointing to is cleaned up as well because of the Drop trait implementation.
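A minimal example of storing a single value on the heap with a box:
fn main() {
    // The value 5 lives on the heap; `b` on the stack holds the pointer to it.
    let b = Box::new(5);
    println!("b = {}", b);
    // When `b` goes out of scope, both the box and the heap data are freed.
}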
Treating Smart Pointers Like Regular References with the Deref Trait
Implementing the Deref trait allows you to customize the behavior of the dereference operator * (not to be confused with the multiplication or glob operator). By implementing Deref in such a way that a smart pointer can be treated like a regular reference, you can write code that operates on references and use that code with smart pointers too.
Implicit Deref Coercions with Functions and Methods
Deref coercion converts a reference to a type that implements the Deref trait into a reference to another type.
Deref coercion is a convenience Rust performs on arguments to functions and methods, and works only on types that implement the Deref trait. It happens automatically when we pass a reference to a particular type's value as an argument to a function or method that doesn't match the parameter type in the function or method definition. A sequence of calls to the deref method converts the type we provided into the type the parameter needs.
Deref coercion was added to Rust so that programmers writing function and method calls don't need to add as many explicit references and dereferences with & and * . The deref coercion feature also lets us write more code that can work for either references or smart pointers.
When the Deref trait is defined for the types involved, Rust will analyze the types and use Deref::deref as many times as necessary to get a reference to match the parameter's type. The number of times that Deref::deref needs to be inserted is resolved at compile time, so there is no runtime penalty for taking advantage of deref coercion!
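A sketch tying these ideas together, using a hypothetical MyBox wrapper (a plain tuple struct, not a real heap allocation) to show both a Deref implementation and deref coercion in a function call:
use std::ops::Deref;

struct MyBox<T>(T);

impl<T> MyBox<T> {
    fn new(x: T) -> MyBox<T> {
        MyBox(x)
    }
}

impl<T> Deref for MyBox<T> {
    type Target = T;

    // deref returns a reference to the inner value, so *my_box works like
    // *(my_box.deref()) behind the scenes.
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

fn hello(name: &str) {
    println!("Hello, {name}!");
}

fn main() {
    let y = MyBox::new(5);
    assert_eq!(5, *y);

    // Deref coercion: &MyBox<String> -> &String -> &str, inserted automatically.
    let m = MyBox::new(String::from("Rust"));
    hello(&m);
}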
How Deref Coercion Interacts with Mutability
Similar to how you use the Deref trait to override the * operator on immutable references, you can use the DerefMut trait to override the * operator on mutable references.
Rust does deref coercion when it finds types and trait implementations in three cases:
• From &T to &U when T: Deref<Target=U>
• From &mut T to &mut U when T: DerefMut<Target=U>
• From &mut T to &U when T: Deref<Target=U>
The first two cases are the same as each other except that the second implements mutability. The first case states that if you have a &T , and T implements Deref to some type U , you can get a &U transparently. The second case states that the same deref coercion happens for mutable references.
The third case is trickier: Rust will also coerce a mutable reference to an immutable one. But the reverse is not possible: immutable references will never coerce to mutable references. Because of the borrowing rules, if you have a mutable reference, that mutable reference must be the only reference to that data (otherwise, the program wouldn't compile). Converting one mutable reference to one immutable reference will never break the borrowing rules. Converting an immutable reference to a mutable reference would require that the initial immutable reference is the only immutable reference to that data, but the borrowing rules don't guarantee that. Therefore, Rust can't make the assumption that converting an immutable reference to a mutable reference is possible.
Running Code on Cleanup with the Drop Trait
The second trait important to the smart pointer pattern is Drop , which lets you customize what happens when a value is about to go out of scope. You can provide an implementation for the Drop trait on any type, and that code can be used to release resources like files or network connections.
You specify the code to run when a value goes out of scope by implementing the Drop trait. The Drop trait requires you to implement one method named drop that takes a mutable reference to self .
Rust automatically called drop for us when our instances went out of scope, calling the code we specified. Variables are dropped in the reverse order of their creation, so d was dropped before c .
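The later snippet in this section uses a CustomSmartPointer type; here is a minimal sketch of what such a type and its Drop implementation might look like (the struct name and messages are assumptions matching that usage):
struct CustomSmartPointer {
    data: String,
}

impl Drop for CustomSmartPointer {
    // drop takes &mut self and runs automatically when the value goes out of scope.
    fn drop(&mut self) {
        println!("Dropping CustomSmartPointer with data `{}`!", self.data);
    }
}

fn main() {
    let c = CustomSmartPointer {
        data: String::from("my stuff"),
    };
    let d = CustomSmartPointer {
        data: String::from("other stuff"),
    };
    println!("CustomSmartPointers created.");
    // `d` is dropped before `c`: variables drop in reverse order of creation.
}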
Dropping a Value Early with std::mem::drop
Unfortunately, it's not straightforward to disable the automatic drop functionality. Disabling drop isn't usually necessary; the whole point of the Drop trait is that it's taken care of automatically. Occasionally, however, you might want to clean up a value early. One example is when using smart pointers that manage locks: you might want to force the drop method that releases the lock to run so that other code in the same scope can acquire the lock. Rust doesn't let you call the Drop trait's drop method manually; instead you have to call the std::mem::drop function provided by the standard library if you want to force a value to be dropped before the end of its scope.
A destructor is analogous to a constructor, which creates an instance. The drop function in Rust is one particular destructor.
Rust doesn't let us call drop explicitly because Rust would still automatically call drop on the value at the end of main . This would cause a double free error because Rust would be trying to clean up the same value twice.
The std::mem::drop function is different from the drop method in the Drop trait. We call it by passing as an argument the value we want to force to be dropped early. The function is in the prelude, so we can call it directly in main :
fn main() {
let c = CustomSmartPointer {
data: String::from("some data"),
};
println!("CustomSmartPointer created.");
drop(c);
println!("CustomSmartPointer dropped before the end of main.");
}
Rc<T> , the Reference Counted Smart Pointer
You have to enable multiple ownership explicitly by using the Rust type Rc<T> , which is an abbreviation for reference counting. The Rc<T> type keeps track of the number of references to a value to determine whether or not the value is still in use. If there are zero references to a value, the value can be cleaned up without any references becoming invalid.
We use the Rc<T> type when we want to allocate some data on the heap for multiple parts of our program to read and we can't determine at compile time which part will finish using the data last. If we knew which part would finish last, we could just make that part the data's owner, and the normal ownership rules enforced at compile time would take effect.
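A small sketch of the reference count changing as clones are created and dropped:
use std::rc::Rc;

fn main() {
    let a = Rc::new(String::from("shared"));
    println!("count after creating a = {}", Rc::strong_count(&a)); // 1

    // Rc::clone only increments the reference count; the data isn't deep-copied.
    let _b = Rc::clone(&a);
    println!("count after creating b = {}", Rc::strong_count(&a)); // 2

    {
        let _c = Rc::clone(&a);
        println!("count after creating c = {}", Rc::strong_count(&a)); // 3
    }

    // `_c` went out of scope, so the count drops back to 2.
    println!("count after c goes out of scope = {}", Rc::strong_count(&a));
}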
RefCell<T> and the Interior Mutability Pattern
Interior mutability is a design pattern in Rust that allows you to mutate data even when there are immutable references to that data; normally, this action is disallowed by the borrowing rules. To mutate data, the pattern uses unsafe code inside a data structure to bend Rust's usual rules that govern mutation and borrowing. Unsafe code indicates to the compiler that we're checking the rules manually instead of relying on the compiler to check them for us;
We can use types that use the interior mutability pattern only when we can ensure that the borrowing rules will be followed at runtime, even though the compiler can't guarantee that. The unsafe code involved is then wrapped in a safe API, and the outer type is still immutable.
Enforcing Borrowing Rules at Runtime with RefCell<T>
Unlike Rc<T> , the RefCell<T> type represents single ownership over the data it holds.
With references and Box<T> , the borrowing rules' invariants are enforced at compile time. With RefCell<T> , these invariants are enforced at runtime. With references, if you break these rules, you'll get a compiler error. With RefCell<T> , if you break these rules, your program will panic and exit.
Here is a recap of the reasons to choose Box<T> , Rc<T> , or RefCell<T> :
• Rc<T> enables multiple owners of the same data; Box<T> and RefCell<T> have single owners.
• Box<T> allows immutable or mutable borrows checked at compile time; Rc<T> allows only immutable borrows checked at compile time; RefCell<T> allows immutable or mutable borrows checked at runtime.
• Because RefCell<T> allows mutable borrows checked at runtime, you can mutate the value inside the RefCell<T> even when the RefCell<T> is immutable.
Mutating the value inside an immutable value is the interior mutability pattern.
Keeping Track of Borrows at Runtime with RefCell<T>
When creating immutable and mutable references, we use the & and &mut syntax, respectively. With RefCell<T> , we use the borrow and borrow_mut methods, which are part of the safe API that belongs to RefCell<T> . The borrow method returns the smart pointer type Ref<T> , and borrow_mut returns the smart pointer type RefMut<T> . Both types implement Deref , so we can treat them like regular references.
The RefCell<T> keeps track of how many Ref<T> and RefMut<T> smart pointers are currently active. Every time we call borrow , the RefCell<T> increases its count of how many immutable borrows are active. When a Ref<T> value goes out of scope, the count of immutable borrows goes down by one. Just like the compile-time borrowing rules, RefCell<T> lets us have many immutable borrows or one mutable borrow at any point in time.
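A minimal sketch of these runtime borrows (the panic case is left as a comment):
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);

    {
        // borrow_mut returns a RefMut<i32>; the runtime borrow counter now
        // records one active mutable borrow.
        let mut value = cell.borrow_mut();
        *value += 1;
    } // The RefMut goes out of scope here, releasing the mutable borrow.

    // borrow returns a Ref<i32>; multiple immutable borrows are fine.
    println!("value is {}", cell.borrow());

    // Calling borrow_mut while a Ref is still active would compile,
    // but would panic at runtime.
}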
Reference Cycles Can Leak Memory
Rust's memory safety guarantees make it difficult, but not impossible, to accidentally create memory that is never cleaned up (known as a memory leak). Preventing memory leaks entirely is not one of Rust's guarantees, meaning memory leaks are memory safe in Rust. We can see that Rust allows memory leaks by using Rc<T> and RefCell<T> : it's possible to create references where items refer to each other in a cycle. This creates memory leaks because the reference count of each item in the cycle will never reach 0 , and the values will never be dropped.
Characteristics of Object-Oriented Languages
There is no consensus in the programming community about what features a language must have to be considered object-oriented. Rust is influenced by many programming paradigms, including OOP. Arguably, OOP languages share certain common characteristics, namely objects, encapsulation, and inheritance. Let's look at what each of those characteristics means and whether Rust supports it.
Encapsulation that Hides Implementation Details
Another aspect commonly associated with OOP is the idea of encapsulation, which means that the implementation details of an object aren't accessible to code using that object. Therefore, the only way to interact with an object is through its public API; code using the object shouldn't be able to reach into the object's internals and change data or behavior directly. This enables the programmer to change and refactor an object's internals without needing to change the code that uses the object.
We can use the pub keyword to decide which modules, types, functions, and methods in our code should be public; by default, everything else is private. For example, we can define a struct AveragedCollection that has a field containing a vector of i32 values. The struct can also have a field that contains the average of the values in the vector, meaning the average doesn't have to be computed on demand whenever anyone needs it. In other words, AveragedCollection will cache the calculated average for us.
pub struct AveragedCollection {
list: Vec<i32>,
average: f64,
}
The struct is marked pub so that other code can use it, but the fields within the struct remain private. This is important in this case because we want to ensure that whenever a value is added or removed from the list, the average is also updated. We do this by implementing add , remove , and average methods on the struct, as sketched below.
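One possible implementation of those methods, keeping the cached average in sync (update_average is a private helper assumed for this sketch):
impl AveragedCollection {
    pub fn add(&mut self, value: i32) {
        self.list.push(value);
        self.update_average();
    }

    pub fn remove(&mut self) -> Option<i32> {
        let result = self.list.pop();
        match result {
            Some(value) => {
                self.update_average();
                Some(value)
            }
            None => None,
        }
    }

    pub fn average(&self) -> f64 {
        self.average
    }

    // Private: callers can read the average but never set it directly.
    fn update_average(&mut self) {
        let total: i32 = self.list.iter().sum();
        self.average = total as f64 / self.list.len() as f64;
    }
}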
If encapsulation is a required aspect for a language to be considered object-oriented, then Rust meets that requirement. The option to use pub or not for different parts of code enables encapsulation of implementation details.
Inheritance as a Type System and as Code Sharing
Inheritance is a mechanism whereby an object can inherit elements from another object's definition, thus gaining the parent object's data and behavior without you having to define them again.
If a language must have inheritance to be an object-oriented language, then Rust is not one. There is no way to define a struct that inherits the parent struct's fields and method implementations without using a macro.
However, if you're used to having inheritance in your programming toolbox, you can use other solutions in Rust, depending on your reason for reaching for inheritance in the first place.
You would choose inheritance for two main reasons. One is for reuse of code: you can implement particular behavior for one type, and inheritance enables you to reuse that implementation for a different type. You can do this in a limited way in Rust code using default trait method implementations,
The other reason to use inheritance relates to the type system: to enable a child type to be used in the same places as the parent type. This is also called polymorphism , which means that you can substitute multiple objects for each other at runtime if they share certain characteristics.
Polymorphism
To many people, polymorphism is synonymous with inheritance. But it's actually a more general concept that refers to code that can work with data of multiple types. For inheritance, those types are generally subclasses. Rust instead uses generics to abstract over different possible types and trait bounds to impose constraints on what those types must provide. This is sometimes called bounded parametric polymorphism.
Conditional if let Expressions
We use if let expressions mainly as a shorter way to write the equivalent of a match that only matches one case. Optionally, if let can have a corresponding else containing code to run if the pattern in the if let doesn't match.
It's also possible to mix and match if let , else if , and else if let expressions. Doing so gives us more flexibility than a match expression, in which we can express only one value to compare with the patterns. Also, Rust doesn't require that the conditions in a series of if let , else if , else if let arms relate to each other.
fn main() {
let favorite_color: Option<&str> = None;
let is_tuesday = false;
let age: Result<u8, _> = "34".parse();
if let Some(color) = favorite_color {
println!("Using your favorite color, {color}, as the background");
} else if is_tuesday {
println!("Tuesday is green day!");
} else if let Ok(age) = age {
if age > 30 {
println!("Using purple as the background color");
} else {
println!("Using orange as the background color");
}
} else {
println!("Using blue as the background color");
}
}
while let Conditional Loops
Similar in construction to if let , the while let conditional loop allows a while loop to run for as long as a pattern continues to match. The following while let loop uses a vector as a stack and prints the values in the opposite order in which they were pushed:
let mut stack = Vec::new();
stack.push(1);
stack.push(2);
stack.push(3);
while let Some(top) = stack.pop() {
println!("{}", top);
}
let Statements
Prior to this chapter, we had only explicitly discussed using patterns with match and if let , but in fact, we've used patterns in other places as well, including in let statements. For example, consider this straightforward variable assignment with let :
let x = 5;
Every time you've used a let statement like this you've been using patterns, although you might not have realized it! More formally, a let statement looks like this:
let PATTERN = EXPRESSION;
Refutability: Whether a Pattern Might Fail to Match
Patterns come in two forms: refutable and irrefutable. Patterns that will match for any possible value passed are irrefutable. An example would be x in the statement let x = 5; because x matches anything and therefore cannot fail to match. Patterns that can fail to match for some possible value are refutable. An example would be Some(x) in the expression if let Some(x) = a_value because if the value in the a_value variable is None rather than Some , the Some(x) pattern will not match.
Function parameters, let statements, and for loops can only accept irrefutable patterns, because the program cannot do anything meaningful when values don't match. The if let and while let expressions accept refutable and irrefutable patterns, but the compiler warns against irrefutable patterns because by definition they're intended to handle possible failure: the functionality of a conditional is in its ability to perform differently depending on success or failure.
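A quick sketch of the distinction (a_value is an arbitrary Option used for illustration):
fn main() {
    let a_value: Option<i32> = Some(5);

    // Irrefutable: `x` matches any value, so a plain `let` is fine.
    let x = a_value;

    // Refutable: Some(n) could fail to match if x were None, so it must go in
    // a construct that handles failure; `let Some(n) = x;` alone would not compile.
    if let Some(n) = x {
        println!("got {}", n);
    }
}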
Multiple Patterns
In match expressions, you can match multiple patterns using the | syntax, which is the pattern or operator. For example, in the following code we match the value of x against the match arms, the first of which has an or option, meaning if the value of x matches either of the values in that arm, that arm's code will run:
let x = 1;
match x {
1 | 2 => println!("one or two"),
3 => println!("three"),
_ => println!("anything"),
}
This code prints one or two .
Matching Ranges of Values with ..=
The ..= syntax allows us to match to an inclusive range of values. In the following code, when a pattern matches any of the values within the given range, that arm will execute:
let x = 5;
match x {
1..=5 => println!("one through five"),
_ => println!("something else"),
}
If x is 1, 2, 3, 4, or 5, the first arm will match. This syntax is more convenient for multiple match values than using the | operator to express the same idea; if we were to use | we would have to specify 1 | 2 | 3 | 4 | 5 . Specifying a range is much shorter, especially when the range is large.
Destructuring Structs
struct Point {
x: i32,
y: i32,
}
fn main() {
let p = Point { x: 0, y: 7 };
let Point { x: a, y: b } = p;
assert_eq!(0, a);
assert_eq!(7, b);
}
------ or ------
struct Point {
x: i32,
y: i32,
}
fn main() {
let p = Point { x: 0, y: 7 };
let Point { x, y } = p;
assert_eq!(0, x);
assert_eq!(7, y);
}
We can also destructure with literal values as part of the struct pattern rather than creating variables for all the fields. Doing so allows us to test some of the fields for particular values while creating variables to destructure the other fields. Below, we have a match expression that separates Point values into three cases: points that lie directly on the x axis (which is true when y = 0 ), on the y axis ( x = 0 ), or neither.
fn main() {
let p = Point { x: 0, y: 7 };
match p {
Point { x, y: 0 } => println!("On the x axis at {}", x),
Point { x: 0, y } => println!("On the y axis at {}", y),
Point { x, y } => println!("On neither axis: ({}, {})", x, y),
}
}
The first arm will match any point that lies on the x axis by specifying that the y field matches if its value matches the literal 0 . The pattern still creates an x variable that we can use in the code for this arm.
Similarly, the second arm matches any point on the y axis by specifying that the x field matches if its value is 0 and creates a variable y for the value of the y field. The third arm doesn't specify any literals, so it matches any other Point and creates variables for both the x and y fields.
In this example, the value p matches the second arm by virtue of x containing a 0, so this code will print On the y axis at 7 . Remember that a match expression stops checking arms once it has found the first matching pattern, so even though Point { x: 0, y: 0} is on the x axis and the y axis, this code would only print On the x axis at 0 .
Ignoring Remaining Parts of a Value with ..
With values that have many parts, we can use the .. syntax to use specific parts and ignore the rest, avoiding the need to list underscores for each ignored value. The .. pattern ignores any parts of a value that we haven't explicitly matched in the rest of the pattern. In the example below, we have a Point struct that holds a coordinate in three-dimensional space. In the match expression, we want to operate only on the x coordinate and ignore the values in the y and z fields.
struct Point {
x: i32,
y: i32,
z: i32,
}
let origin = Point { x: 0, y: 0, z: 0 };
match origin {
Point { x, .. } => println!("x is {}", x),
}
We list the x value and then just include the .. pattern. This is quicker than having to list y: _ and z: _ , particularly when we're working with structs that have lots of fields in situations where only one or two fields are relevant.
Extra Conditionals with Match Guards
A match guard is an additional if condition, specified after the pattern in a match arm, that must also match for that arm to be chosen. Match guards are useful for expressing more complex ideas than a pattern alone allows.
The condition can use variables created in the pattern. The following match has a first arm with the pattern Some(x) and a match guard of if x % 2 == 0 (which will be true if the number is even).
let num = Some(4);
match num {
Some(x) if x % 2 == 0 => println!("The number {} is even", x),
Some(x) => println!("The number {} is odd", x),
None => (),
}
This example will print The number 4 is even . When num is compared to the pattern in the first arm, it matches, because Some(4) matches Some(x) . Then the match guard checks whether the remainder of dividing x by 2 is equal to 0, and because it is, the first arm is selected.
If num had been Some(5) instead, the match guard in the first arm would have been false because the remainder of 5 divided by 2 is 1, which is not equal to 0. Rust would then go to the second arm, which would match because the second arm doesn't have a match guard and therefore matches any Some variant. There is no way to express the if x % 2 == 0 condition within a pattern, so the match guard gives us the ability to express this logic. The downside of this additional expressiveness is that the compiler doesn't try to check for exhaustiveness when match guard expressions are involved.
fn main() {
let x = Some(5);
let y = 10;
match x {
Some(50) => println!("Got 50"),
Some(n) if n == y => println!("Matched, n = {n}"),
_ => println!("Default case, x = {:?}", x),
}
println!("at the end: x = {:?}, y = {y}", x);
}
This code will now print Default case, x = Some(5) . The pattern in the second match arm doesn't introduce a new variable y that would shadow the outer y , meaning we can use the outer y in the match guard. Instead of specifying the pattern as Some(y) , which would have shadowed the outer y , we specify Some(n) . This creates a new variable n that doesn't shadow anything because there is no n variable outside the match .
@ Bindings
The at operator @ lets us create a variable that holds a value at the same time as we're testing that value for a pattern match. We want to test that a Message::Hello id field is within the range 3..=7 . We also want to bind the value to the variable id_variable so we can use it in the code associated with the arm. We could name this variable id , the same as the field, but for this example we'll use a different name.
enum Message {
Hello { id: i32 },
}
let msg = Message::Hello { id: 5 };
match msg {
Message::Hello {
id: id_variable @ 3..=7,
} => println!("Found an id in range: {}", id_variable),
Message::Hello { id: 10..=12 } => {
println!("Found an id in another range")
}
Message::Hello { id } => println!("Found some other id: {}", id),
}
This example will print Found an id in range: 5 . By specifying id_variable @ before the range 3..=7 , we're capturing whatever value matched the range while also testing that the value matched the range pattern.
In the second arm, where we only have a range specified in the pattern, the code associated with the arm doesn't have a variable that contains the actual value of the id field. The id field's value could have been 10, 11, or 12, but the code that goes with that pattern doesn't know which it is. The pattern code isn't able to use the value from the id field, because we haven't saved the id value in a variable.
Using @ lets us test a value and save it in a variable within one pattern.
To switch to unsafe Rust, use the unsafe keyword and then start a new block that holds the unsafe code. You can take five actions in unsafe Rust that you can't in safe Rust, which we call unsafe superpowers. Those superpowers include the ability to:
• Dereference a raw pointer
• Call an unsafe function or method
• Access or modify a mutable static variable
• Implement an unsafe trait
• Access fields of unions
It's important to understand that unsafe doesn't turn off the borrow checker or disable any other of Rust's safety checks: if you use a reference in unsafe code, it will still be checked. The unsafe keyword only gives you access to these five features that are then not checked by the compiler for memory safety. You'll still get some degree of safety inside of an unsafe block.
In addition, unsafe does not mean the code inside the block is necessarily dangerous or that it will definitely have memory safety problems: the intent is that as the programmer, you'll ensure the code inside an unsafe block will access memory in a valid way.
To isolate unsafe code as much as possible, it's best to enclose unsafe code within a safe abstraction and provide a safe API, which we'll discuss later in the chapter when we examine unsafe functions and methods. Parts of the standard library are implemented as safe abstractions over unsafe code that has been audited. Wrapping unsafe code in a safe abstraction prevents uses of unsafe from leaking out into all the places that you or your users might want to use the functionality implemented with unsafe code, because using a safe abstraction is safe.
Dereferencing a Raw Pointer
Unsafe Rust has two new types called raw pointers that are similar to references. As with references, raw pointers can be immutable or mutable and are written as *const T and *mut T , respectively. The asterisk isn't the dereference operator; it's part of the type name. In the context of raw pointers, immutable means that the pointer can't be directly assigned to after being dereferenced.
Different from references and smart pointers, raw pointers:
• Are allowed to ignore the borrowing rules by having both immutable and mutable pointers or multiple mutable pointers to the same location
• Aren't guaranteed to point to valid memory
• Are allowed to be null
• Don't implement any automatic cleanup
By opting out of having Rust enforce these guarantees, you can give up guaranteed safety in exchange for greater performance or the ability to interface with another language or hardware where Rust's guarantees don't apply.
let mut num = 5;
let r1 = &num as *const i32; // an immutable raw pointer to num
let r2 = &mut num as *mut i32; // a mutable raw pointer to num
Notice that we don't include the unsafe keyword in this code. We can create raw pointers in safe code; we just can't dereference raw pointers outside an unsafe block, as you'll see in a bit.
Recall that we can create raw pointers in safe code, but we can't dereference raw pointers and read the data being pointed to. Below, we use the dereference operator * on the raw pointers, which requires an unsafe block.
unsafe {
println!("r1 is: {}", *r1);
println!("r2 is: {}", *r2);
}
With raw pointers, we can create a mutable pointer and an immutable pointer to the same location and change data through the mutable pointer, potentially creating a data race. Be careful!
Calling an Unsafe Function or Method
The second type of operation you can perform in an unsafe block is calling unsafe functions. Unsafe functions and methods look exactly like regular functions and methods, but they have an extra unsafe before the rest of the definition.
unsafe fn dangerous() {}
unsafe {
dangerous();
}
Creating a Safe Abstraction over Unsafe Code
Just because a function contains unsafe code doesn't mean we need to mark the entire function as unsafe. In fact, wrapping unsafe code in a safe function is a common abstraction.
// This safe-Rust attempt does not compile: the slice is borrowed mutably twice.
fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
let len = values.len();
assert!(mid <= len);
(&mut values[..mid], &mut values[mid..])
}
We can't implement this function using only safe Rust. For simplicity, we'll implement split_at_mut as a function rather than a method, and only for slices of i32 values rather than for a generic type T .
Rust's borrow checker can't understand that we're borrowing different parts of the slice; it only knows that we're borrowing from the same slice twice. Borrowing different parts of a slice is fundamentally okay because the two slices aren't overlapping, but Rust isn't smart enough to know this. When we know code is okay, but Rust doesn't, it's time to reach for unsafe code.
use std::slice;
fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
let len = values.len();
let ptr = values.as_mut_ptr();
assert!(mid <= len);
unsafe {
(
slice::from_raw_parts_mut(ptr, mid),
slice::from_raw_parts_mut(ptr.add(mid), len - mid),
)
}
}
Note that we don't need to mark the resulting split_at_mut function as unsafe , and we can call this function from safe Rust. We've created a safe abstraction to the unsafe code with an implementation of the function that uses unsafe code in a safe way, because it creates only valid pointers from the data this function has access to.
Using extern Functions to Call External Code
Sometimes, your Rust code might need to interact with code written in another language. For this, Rust has the keyword extern that facilitates the creation and use of a Foreign Function Interface (FFI) . An FFI is a way for a programming language to define functions and enable a different (foreign) programming language to call those functions.
Functions declared within extern blocks are always unsafe to call from Rust code. The reason is that other languages don't enforce Rust's rules and guarantees, and Rust can't check them, so responsibility falls on the programmer to ensure safety.
extern "C" {
fn abs(input: i32) -> i32;
}
fn main() {
unsafe {
println!("Absolute value of -3 according to C: {}", abs(-3));
}
}
Within the extern "C" block, we list the names and signatures of external functions from another language we want to call. The "C" part defines which application binary interface (ABI) the external function uses: the ABI defines how to call the function at the assembly level. The "C" ABI is the most common and follows the C programming language's ABI.
Calling Rust Functions from Other Languages
We can also use extern to create an interface that allows other languages to call Rust functions. Instead of creating a whole extern block, we add the extern keyword and specify the ABI to use just before the fn keyword for the relevant function. We also need to add a #[no_mangle] annotation to tell the Rust compiler not to mangle the name of this function. Mangling is when a compiler changes the name we've given a function to a different name that contains more information for other parts of the compilation process to consume but is less human readable. Every programming language compiler mangles names slightly differently, so for a Rust function to be nameable by other languages, we must disable the Rust compiler's name mangling.
In the following example, we make the call_from_c function accessible from C code, after it's compiled to a shared library and linked from C:
#[no_mangle]
pub extern "C" fn call_from_c() {
println!("Just called a Rust function from C!");
}
This usage of extern does not require unsafe .
Accessing or Modifying a Mutable Static Variable
In this book, we've not yet talked about global variables, which Rust does support but can be problematic with Rust's ownership rules. If two threads are accessing the same mutable global variable, it can cause a data race. In Rust, global variables are called static variables.
static HELLO_WORLD: &str = "Hello, world!";
fn main() {
println!("name is: {}", HELLO_WORLD);
}
Static variables are similar to constants. The names of static variables are in SCREAMING_SNAKE_CASE by convention. Static variables can only store references with the 'static lifetime, which means the Rust compiler can figure out the lifetime and we aren't required to annotate it explicitly. Accessing an immutable static variable is safe.
A subtle difference between constants and immutable static variables is that values in a static variable have a fixed address in memory. Using the value will always access the same data. Constants, on the other hand, are allowed to duplicate their data whenever they're used. Another difference is that static variables can be mutable. Accessing and modifying mutable static variables is unsafe.
static mut COUNTER: u32 = 0;
fn add_to_count(inc: u32) {
unsafe {
COUNTER += inc;
}
}
fn main() {
add_to_count(3);
unsafe {
println!("COUNTER: {}", COUNTER);
}
}
With mutable data that is globally accessible, it's difficult to ensure there are no data races, which is why Rust considers mutable static variables to be unsafe. Where possible, it's preferable to use concurrency techniques and thread-safe smart pointers so that the compiler checks that data accessed from different threads is handled safely.
Implementing an Unsafe Trait
We can use unsafe to implement an unsafe trait . A trait is unsafe when at least one of its methods has some invariant that the compiler can't verify. We declare that a trait is unsafe by adding the unsafe keyword before trait and marking the implementation of the trait as unsafe too.
unsafe trait Foo {
// methods go here
}
unsafe impl Foo for i32 {
// method implementations go here
}
Accessing Fields of a Union
The final action that works only with unsafe is accessing fields of a union . A union is similar to a struct , but only one declared field is used in a particular instance at one time. Unions are primarily used to interface with unions in C code. Accessing union fields is unsafe because Rust can't guarantee the type of the data currently being stored in the union instance.
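A minimal sketch (the union name and fields are made up for illustration):
// All fields of a union share the same memory; only one is meaningful at a time.
union IntOrFloat {
    i: u32,
    f: f32,
}

fn main() {
    let value = IntOrFloat { i: 123 };

    // Reading a union field is unsafe: the compiler can't know which field was
    // last written, so we promise we're reading the one that's actually valid.
    unsafe {
        println!("value.i = {}", value.i);
    }
}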
Specifying Placeholder Types in Trait Definitions with Associated Types
Associated types connect a type placeholder with a trait such that the trait method definitions can use these placeholder types in their signatures. The implementor of a trait will specify the concrete type to be used instead of the placeholder type for the particular implementation. That way, we can define a trait that uses some types without needing to know exactly what those types are until the trait is implemented.
pub trait Iterator {
type Item;
fn next(&mut self) -> Option<Self::Item>;
}
Associated types might seem like a similar concept to generics, in that the latter allow us to define a function without specifying what types it can handle. To examine the difference between the two concepts, we'll look at an implementation of the Iterator trait on a type named Counter that specifies the Item type is u32 :
impl Iterator for Counter {
type Item = u32;
fn next(&mut self) -> Option<Self::Item> {
// --snip--
This syntax seems comparable to that of generics. So why not just define the Iterator trait with generics instead?
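A hypothetical generic version, for comparison only (the real standard library trait uses the associated type shown above):
pub trait Iterator<T> {
    fn next(&mut self) -> Option<T>;
}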
The difference is that when using generics, we must annotate the types in each implementation; because we can also implement Iterator<String> for Counter or any other type, we could have multiple implementations of Iterator for Counter . In other words, when a trait has a generic parameter, it can be implemented for a type multiple times, changing the concrete types of the generic type parameters each time. When we use the next method on Counter , we would have to provide type annotations to indicate which implementation of Iterator we want to use.
With associated types, we don't need to annotate types because we can't implement a trait on a type multiple times. with the definition that uses associated types, we can only choose what the type of Item will be once, because there can only be one impl Iterator for Counter . We don't have to specify that we want an iterator of u32 values everywhere that we call next on Counter .
Associated types also become part of the trait's contract: implementors of the trait must provide a type to stand in for the associated type placeholder. Associated types often have a name that describes how the type will be used, and documenting the associated type in the API documentation is good practice.
Default Generic Type Parameters and Operator Overloading
When we use generic type parameters, we can specify a default concrete type for the generic type. This eliminates the need for implementors of the trait to specify a concrete type if the default type works. You specify a default type when declaring a generic type with the <PlaceholderType=ConcreteType> syntax.
A great example of a situation where this technique is useful is with operator overloading, in which you customize the behavior of an operator (such as + ) in particular situations.
Rust doesn't allow you to create your own operators or overload arbitrary operators. But you can overload the operations and corresponding traits listed in std::ops by implementing the traits associated with the operator. For example, we overload the + operator to add two Point instances together. We do this by implementing the Add trait on a Point struct:
use std::ops::Add;
#[derive(Debug, Copy, Clone, PartialEq)]
struct Point {
x: i32,
y: i32,
}
impl Add for Point {
type Output = Point;
fn add(self, other: Point) -> Point {
Point {
x: self.x + other.x,
y: self.y + other.y,
}
}
}
fn main() {
assert_eq!(
Point { x: 1, y: 0 } + Point { x: 2, y: 3 },
Point { x: 3, y: 3 }
);
}
The default generic type here lives in the Add trait itself, which the standard library defines roughly like this:
trait Add<Rhs=Self> {
type Output;
fn add(self, rhs: Rhs) -> Self::Output;
}
This code should look generally familiar: a trait with one method and an associated type. The new part is Rhs=Self : this syntax is called default type parameters. The Rhs generic type parameter (short for “right hand side”) defines the type of the rhs parameter in the add method. If we don't specify a concrete type for Rhs when we implement the Add trait, the type of Rhs will default to Self , which will be the type we're implementing Add on.
When we implemented Add for Point , we used the default for Rhs because we wanted to add two Point instances.
Let's look at an example of implementing the Add trait where we want to customize the Rhs type rather than using the default.
We have two structs, Millimeters and Meters , holding values in different units. This thin wrapping of an existing type in another struct is known as the newtype pattern,
use std::ops::Add;
struct Millimeters(u32);
struct Meters(u32);
impl Add<Meters> for Millimeters {
type Output = Millimeters;
fn add(self, other: Meters) -> Millimeters {
Millimeters(self.0 + (other.0 * 1000))
}
}
To add Millimeters and Meters , we specify impl Add<Meters> to set the value of the Rhs type parameter instead of using the default of Self .
You'll use default type parameters in two main ways:
• To extend a type without breaking existing code
• To allow customization in specific cases most users won't need
Fully Qualified Syntax for Disambiguation: Calling Methods with the Same Name
Nothing in Rust prevents a trait from having a method with the same name as another trait's method, nor does Rust prevent you from implementing both traits on one type. It's also possible to implement a method directly on the type with the same name as methods from traits.
trait Pilot {
fn fly(&self);
}
trait Wizard {
fn fly(&self);
}
struct Human;
impl Pilot for Human {
fn fly(&self) {
println!("This is your captain speaking.");
}
}
impl Wizard for Human {
fn fly(&self) {
println!("Up!");
}
}
impl Human {
fn fly(&self) {
println!("*waving arms furiously*");
}
}
When we call fly on an instance of Human , the compiler defaults to calling the method that is directly implemented on the type, as the following code shows.
fn main() {
let person = Human;
person.fly();
}
To call the fly methods from either the Pilot trait or the Wizard trait, we need to use more explicit syntax to specify which fly method we mean.
fn main() {
let person = Human;
Pilot::fly(&person);
Wizard::fly(&person);
person.fly();
}
Specifying the trait name before the method name clarifies to Rust which implementation of fly we want to call. We could also write Human::fly(&person) , which is equivalent to person.fly() , but this is a bit longer to write if we don't need to disambiguate.
However, associated functions that are not methods don't have a self parameter. When there are multiple types or traits that define non-method functions with the same function name, Rust doesn't always know which type you mean unless you use fully qualified syntax.
trait Animal {
fn baby_name() -> String;
}
struct Dog;
impl Dog {
fn baby_name() -> String {
String::from("Spot")
}
}
impl Animal for Dog {
fn baby_name() -> String {
String::from("puppy")
}
}
fn main() {
println!("A baby dog is called a {}", Dog::baby_name());
}
This output isn't what we wanted. We want to call the baby_name function that is part of the Animal trait that we implemented on Dog so the code prints A baby dog is called a puppy . The technique of specifying the trait name that we used earlier doesn't help here; if we change main to the following code, we'll get a compilation error.
fn main() {
println!("A baby dog is called a {}", Animal::baby_name());
}
To disambiguate and tell Rust that we want to use the implementation of Animal for Dog as opposed to the implementation of Animal for some other type, we need to use fully qualified syntax.
fn main() {
println!("A baby dog is called a {}", <Dog as Animal>::baby_name());
}
We're providing Rust with a type annotation within the angle brackets, which indicates we want to call the baby_name method from the Animal trait as implemented on Dog , by saying that we want to treat the Dog type as an Animal for this function call. This code will now print what we want: A baby dog is called a puppy .
In general, fully qualified syntax is defined as follows:
<Type as Trait>::function(receiver_if_method, next_arg, ...);
For associated functions that aren't methods, there would not be a receiver : there would only be the list of other arguments. You could use fully qualified syntax everywhere that you call functions or methods. However, you're allowed to omit any part of this syntax that Rust can figure out from other information in the program. You only need to use this more verbose syntax in cases where there are multiple implementations that use the same name and Rust needs help to identify which implementation you want to call.
Using Supertraits to Require One Trait's Functionality Within Another Trait
Sometimes, you might write a trait definition that depends on another trait: for a type to implement the first trait, you want to require that type to also implement the second trait. You would do this so that your trait definition can make use of the associated items of the second trait. The trait your trait definition is relying on is called a supertrait of your trait.
use std::fmt;
trait OutlinePrint: fmt::Display {
fn outline_print(&self) {
let output = self.to_string();
let len = output.len();
println!("{}", "*".repeat(len + 4));
println!("*{}*", " ".repeat(len + 2));
println!("* {} *", output);
println!("*{}*", " ".repeat(len + 2));
println!("{}", "*".repeat(len + 4));
}
}
Because we've specified that OutlinePrint requires the Display trait, we can use the to_string function that is automatically implemented for any type that implements Display . If we tried to use to_string without adding a colon and specifying the Display trait after the trait name, we'd get an error saying that no method named to_string was found for the type &Self in the current scope.
Let's see what happens when we try to implement OutlinePrint on a type that doesn't implement Display , such as the Point struct:
struct Point {
x: i32,
y: i32,
}
impl OutlinePrint for Point {}
We get an error saying that Display is required but not implemented. To fix this, we implement Display on Point :
use std::fmt;
impl fmt::Display for Point {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "({}, {})", self.x, self.y)
}
}
Then implementing the OutlinePrint trait on Point will compile successfully, and we can call outline_print on a Point instance to display it within an outline of asterisks.
Using the Newtype Pattern to Implement External Traits on External Types
We mentioned earlier the orphan rule, which states that we're only allowed to implement a trait on a type if either the trait or the type is local to our crate. It's possible to get around this restriction using the newtype pattern, which involves creating a new type in a tuple struct.
The tuple struct will have one field and be a thin wrapper around the type we want to implement a trait for. Then the wrapper type is local to our crate, and we can implement the trait on the wrapper. Newtype is a term that originates from the Haskell programming language. There is no runtime performance penalty for using this pattern, and the wrapper type is elided at compile time.
use std::fmt;
struct Wrapper(Vec<String>);
impl fmt::Display for Wrapper {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "[{}]", self.0.join(", "))
}
}
fn main() {
let w = Wrapper(vec![String::from("hello"), String::from("world")]);
println!("w = {}", w);
}
The Rust type system has some features that we've so far mentioned but haven't yet discussed. We'll start by discussing newtypes in general as we examine why newtypes are useful as types. Then we'll move on to type aliases, a feature similar to newtypes but with slightly different semantics. We'll also discuss the ! type and dynamically sized types.
Creating Type Synonyms with Type Aliases
Rust provides the ability to declare a type alias to give an existing type another name. For this we use the type keyword. For example, we can create the alias Kilometers to i32 like so:
type Kilometers = i32;
Now the alias Kilometers is a synonym for i32 ; however, Kilometers is not a separate, new type. Values that have the type Kilometers will be treated the same as values of type i32 :
type Kilometers = i32;
let x: i32 = 5;
let y: Kilometers = 5;
println!("x + y = {}", x + y);
The main use case for type aliases is to reduce repetition of lengthy types, and choosing a meaningful name for a type alias can help communicate your intent as well (thunk is a word for code to be evaluated at a later time, so it's an appropriate name for a closure that gets stored). Consider this verbose type:
let f: Box<dyn Fn() + Send + 'static> = Box::new(|| println!("hi"));
fn takes_long_type(f: Box<dyn Fn() + Send + 'static>) {
// --snip--
}
fn returns_long_type() -> Box<dyn Fn() + Send + 'static> {
// --snip--
}
A type alias makes this code more manageable by reducing the repetition. We've introduced an alias named Thunk for the verbose type and can replace all uses of the type with the shorter alias Thunk .
type Thunk = Box<dyn Fn() + Send + 'static>;
let f: Thunk = Box::new(|| println!("hi"));
fn takes_long_type(f: Thunk) {
// --snip--
}
fn returns_long_type() -> Thunk {
// --snip--
}
The Never Type that Never Returns
Rust has a special type named ! that's known in type theory lingo as the empty type because it has no values. We prefer to call it the never type because it stands in the place of the return type when a function will never return. Here is an example:
fn bar() -> ! {
// --snip--
}
This code is read as “the function bar returns never.” Functions that return never are called diverging functions . We can't create values of the type ! so bar can never possibly return.
Dynamically Sized Types and the Sized Trait
Rust needs to know certain details about its types, such as how much space to allocate for a value of a particular type. This leaves one corner of its type system a little confusing at first: the concept of dynamically sized types. Sometimes referred to as DSTs or unsized types, these types let us write code using values whose size we can know only at runtime.
Let's dig into the details of a dynamically sized type called str , which we've been using throughout the book. That's right, not &str , but str on its own, is a DST. We can't know how long the string is until runtime, meaning we can't create a variable of type str , nor can we take an argument of type str . Consider the following code, which does not work:
// ! ! Compile Error ! !
let s1: str = "Hello there!";
let s2: str = "How's it going?";
So although a &T is a single value that stores the memory address of where the T is located, a &str is two values: the address of the str and its length. As such, we can know the size of a &str value at compile time: it's twice the length of a usize . That is, we always know the size of a &str , no matter how long the string it refers to is. In general, this is the way in which dynamically sized types are used in Rust: they have an extra bit of metadata that stores the size of the dynamic information. The golden rule of dynamically sized types is that we must always put values of dynamically sized types behind a pointer of some kind.
We can combine str with all kinds of pointers: for example, Box<str> or Rc<str> .
To work with DSTs, Rust provides the Sized trait to determine whether or not a type's size is known at compile time. This trait is automatically implemented for everything whose size is known at compile time. In addition, Rust implicitly adds a bound on Sized to every generic function. That is, a generic function definition like this:
fn generic<T>(t: T) {
// --snip--
}
is actually treated as though we had written this:
fn generic<T: Sized>(t: T) {
// --snip--
}
By default, generic functions will work only on types that have a known size at compile time. However, you can use the following special syntax to relax this restriction:
fn generic<T: ?Sized>(t: &T) {
// --snip--
}
A trait bound on ?Sized means “ T may or may not be Sized ” and this notation overrides the default that generic types must have a known size at compile time. The ?Trait syntax with this meaning is only available for Sized , not any other traits.
Also note that we switched the type of the t parameter from T to &T . Because the type might not be Sized , we need to use it behind some kind of pointer. In this case, we've chosen a reference.
Function Pointers
The fn type is called a function pointer. Passing functions with function pointers will allow you to use functions as arguments to other functions.
The syntax for specifying that a parameter is a function pointer is similar to that of closures. In the following example, we define a function add_one that adds one to its parameter. The function do_twice takes two parameters: a function pointer to any function that takes an i32 parameter and returns an i32 , and one i32 value. The do_twice function calls the function f twice, passing it the arg value, then adds the two function call results together. The main function calls do_twice with the arguments add_one and 5 .
fn add_one(x: i32) -> i32 {
x + 1
}
fn do_twice(f: fn(i32) -> i32, arg: i32) -> i32 {
f(arg) + f(arg)
}
fn main() {
let answer = do_twice(add_one, 5);
println!("The answer is: {}", answer);
}
This code prints The answer is: 12 . We specify that the parameter f in do_twice is an fn that takes one parameter of type i32 and returns an i32 . We can then call f in the body of do_twice .
In main , we can pass the function name add_one as the first argument to do_twice . Unlike closures, fn is a type rather than a trait, so we specify fn as the parameter type directly rather than declaring a generic type parameter with one of the Fn traits as a trait bound.
Function pointers implement all three of the closure traits ( Fn , FnMut , and FnOnce ), meaning you can always pass a function pointer as an argument for a function that expects a closure. It's best to write functions using a generic type and one of the closure traits so your functions can accept either functions or closures.
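As an example of that flexibility, here's a sketch where map accepts either a closure or a named function (using the standard library's ToString::to_string):
fn main() {
    let numbers = vec![1, 2, 3];

    // Passing a closure:
    let with_closure: Vec<String> = numbers.iter().map(|i| i.to_string()).collect();

    // Passing a function pointer (a fully qualified method path) instead:
    let with_fn: Vec<String> = numbers.iter().map(ToString::to_string).collect();

    assert_eq!(with_closure, with_fn);
}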
We've used macros like println! throughout this book, but we haven't fully explored what a macro is and how it works. The term macro refers to a family of features in Rust: declarative macros with macro_rules! and three kinds of procedural macros:
• Custom #[derive] macros that specify code added with the derive attribute used on structs and enums
• Attribute-like macros that define custom attributes usable on any item
• Function-like macros that look like function calls but operate on the tokens specified as their argument
The Difference Between Macros and Functions
Fundamentally, macros are a way of writing code that writes other code, which is known as metaprogramming. In Appendix C, we discuss the derive attribute, which generates an implementation of various traits for you. We've also used the println! and vec! macros throughout the book. All of these macros expand to produce more code than the code you've written manually.
Meta programming is useful for reducing the amount of code you have to write and maintain, which is also one of the roles of functions. However, macros have some additional powers that functions don't.
A function signature must declare the number and type of parameters the function has. Macros, on the other hand, can take a variable number of parameters: we can call println! ("hello") with one argument or println!("hello {}", name) with two arguments. Also, macros are expanded before the compiler interprets the meaning of the code, so a macro can, for example, implement a trait on a given type. A function can't, because it gets called at runtime and a trait needs to be implemented at compile time.
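To make the trait point concrete, here is a small invented sketch (the Describe trait and the impl_describe! macro are illustrative, not from the book): a declarative macro can generate a trait implementation for whatever type it is handed, which a runtime function cannot do.
trait Describe {
    fn describe() -> String;
}

// Expands into an impl block for the given type at compile time.
macro_rules! impl_describe {
    ($t:ty) => {
        impl Describe for $t {
            fn describe() -> String {
                format!("I am a {}", stringify!($t))
            }
        }
    };
}

struct Point;
impl_describe!(Point);

fn main() {
    println!("{}", Point::describe()); // prints "I am a Point"
}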
The downside to implementing a macro instead of a function is that macro definitions are more complex than function definitions because you're writing Rust code that writes Rust code. Due to this indirection, macro definitions are generally more difficult to read, understand, and maintain than function definitions.
Another important difference between macros and functions is that you must define macros or bring them into scope before you call them in a file, as opposed to functions you can define anywhere and call anywhere.
Declarative Macros with macro_rules! for General Metaprogramming
The most widely used form of macros in Rust is the declarative macro. These are also sometimes referred to as “macros by example,” “ macro_rules! macros,” or just plain “macros.” At their core, declarative macros allow you to write something similar to a Rust match expression. Match expressions are control structures that take an expression, compare the resulting value of the expression to patterns, and then run the code associated with the matching pattern. Macros also compare a value to patterns that are associated with particular code: in this situation, the value is the literal Rust source code passed to the macro; the patterns are compared with the structure of that source code; and the code associated with each pattern, when matched, replaces the code passed to the macro. This all happens during compilation.
#[macro_export]
macro_rules! vec {
( $( $x:expr ),* ) => {
{
let mut temp_vec = Vec::new();
$(
temp_vec.push($x);
)*
temp_vec
}
};
}
The #[macro_export] annotation indicates that this macro should be made available whenever the crate in which the macro is defined is brought into scope. Without this annotation, the macro can't be brought into scope.
We then start the macro definition with macro_rules! and the name of the macro we're defining, without the exclamation mark.
The structure in the vec! body is similar to the structure of a match expression. Here we have one arm with the pattern ( $( $x:expr ),* ) , followed by => and the block of code associated with this pattern. If the pattern matches, the associated block of code will be emitted. Given that this is the only pattern in this macro, there is only one valid way to match; any other pattern will result in an error. More complex macros will have more than one arm.
First, we use a set of parentheses to encompass the whole pattern. We use a dollar sign $ to declare a variable in the macro system that will contain the Rust code matching the pattern. The dollar sign makes it clear this is a macro variable as opposed to a regular Rust variable. Next comes a set of parentheses that captures values that match the pattern within the parentheses for use in the replacement code. Within $() is $x:expr , which matches any Rust expression and gives the expression the name $x .
The comma following $() indicates that a literal comma separator character could optionally appear after the code that matches the code in $(). The * that follows specifies that the pattern matches zero or more of whatever precedes the *.
When we call this macro with vec![1, 2, 3]; , the $x pattern matches three times with the three expressions 1 , 2 , and 3 .
Now let's look at the code in the body associated with this arm: temp_vec.push($x) within $()* is generated for each part that matches $() in the pattern, zero or more times depending on how many times the pattern matches. The $x is replaced with each matched expression.
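Roughly, then, the call vec![1, 2, 3] expands into code along these lines (a sketch of the expansion, not the compiler's literal output):
{
    let mut temp_vec = Vec::new();
    temp_vec.push(1);
    temp_vec.push(2);
    temp_vec.push(3);
    temp_vec
}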
Procedural Macros for Generating Code from Attributes
The second form of macros is the procedural macro, which acts more like a function (and is a type of procedure). Procedural macros accept some code as an input, operate on that code, and produce some code as an output rather than matching against patterns and replacing the code with other code as declarative macros do. The three kinds of procedural macros are custom derive, attribute-like, and function-like, and all work in a similar fashion.
When creating procedural macros, the definitions must reside in their own crate with a special crate type. This is for complex technical reasons that we hope to eliminate in the future.
use proc_macro;
#[some_attribute]
pub fn some_name(input: TokenStream) -> TokenStream {}
The function that defines a procedural macro takes a TokenStream as an input and produces a TokenStream as an output. The TokenStream type is defined by the proc_macro crate that is included with Rust and represents a sequence of tokens. This is the core of the macro: the source code that the macro is operating on makes up the input TokenStream , and the code the macro produces is the output TokenStream . The function also has an attribute attached to it that specifies which kind of procedural macro we're creating. We can have multiple kinds of procedural macros in the same crate.
How to Write a Custom derive Macro
Let's create a crate named hello_macro that defines a trait named HelloMacro with one associated function named hello_macro . Rather than making our users implement the HelloMacro trait for each of their types, we'll provide a procedural macro so users can annotate their type with #[derive(HelloMacro)] to get a default implementation of the hello_macro function. The default implementation will print Hello, Macro! My name is TypeName! where TypeName is the name of the type on which this trait has been defined.
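The trait itself, living in the hello_macro crate, is just an ordinary trait declaration:
pub trait HelloMacro {
    fn hello_macro();
}
A crate that depends on both hello_macro and hello_macro_derive can then derive the implementation like this: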
use hello_macro::HelloMacro;
use hello_macro_derive::HelloMacro;
#[derive(HelloMacro)]
struct Pancakes;
fn main() {
Pancakes::hello_macro();
}
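For completeness, here is a sketch of what the hello_macro_derive crate could contain, following the book's general approach and assuming the syn and quote crates as dependencies (and proc-macro = true in that crate's Cargo.toml):
use proc_macro::TokenStream;
use quote::quote;

#[proc_macro_derive(HelloMacro)]
pub fn hello_macro_derive(input: TokenStream) -> TokenStream {
    // Parse the annotated item (e.g. `struct Pancakes;`) into a syntax tree.
    let ast: syn::DeriveInput = syn::parse(input).unwrap();
    let name = &ast.ident;

    // Generate the trait implementation for the annotated type.
    let generated = quote! {
        impl HelloMacro for #name {
            fn hello_macro() {
                println!("Hello, Macro! My name is {}!", stringify!(#name));
            }
        }
    };
    generated.into()
}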
Attribute-like macros
Attribute-like macros are similar to custom derive macros, but instead of generating code for the derive attribute, they allow you to create new attributes. They're also more flexible: derive only works for structs and enums; attributes can be applied to other items as well, such as functions.
Here's an example of using an attribute-like macro: say you have an attribute named route that annotates functions when using a web application framework:
#[route(GET, "/")]
fn index() {}
This #[route] attribute would be defined by the framework as a procedural macro. The signature of the macro definition function would look like this:
#[proc_macro_attribute]
pub fn route(attr: TokenStream, item: TokenStream) -> TokenStream {
Here, we have two parameters of type TokenStream. The first is for the contents of the attribute: the GET, "/" part. The second is the body of the item the attribute is attached to: in this case, fn index() {} and the rest of the function's body.
Function-like macros
Function-like macros define macros that look like function calls. Similarly to macro_rules! macros, they're more flexible than functions; for example, they can take an unknown number of arguments. However, macro_rules! macros can be defined only using the match-like syntax we discussed in the section “Declarative Macros with macro_rules! for General Metaprogramming” earlier. Function-like macros take a TokenStream parameter, and their definition manipulates that TokenStream using Rust code as the other two types of procedural macros do. An example of a function-like macro is an sql! macro that might be called like so:
let sql = sql!(SELECT * FROM posts WHERE id=1);
This macro would parse the SQL statement inside it and check that it's syntactically correct, which is much more complex processing than a macro_rules! macro can do. The sql! macro would be defined like this:
#[proc_macro]
pub fn sql(input: TokenStream) -> TokenStream {
• as - perform primitive casting, disambiguate the specific trait containing an item, or rename items in use statements
• async - return a Future instead of blocking the current thread
• await - suspend execution until the result of a Future is ready
• break - exit a loop immediately
• const - define constant items or constant raw pointers
• continue - continue to the next loop iteration
• crate - in a module path, refers to the crate root
• dyn - dynamic dispatch to a trait object
• else - fallback for if and if let control flow constructs
• enum - define an enumeration
• extern - link an external function or variable
• false - Boolean false literal
• fn - define a function or the function pointer type
• for - loop over items from an iterator, implement a trait, or specify a higher-ranked lifetime
• if - branch based on the result of a conditional expression
• impl - implement inherent or trait functionality
• in - part of for loop syntax
• let - bind a variable
• loop - loop unconditionally
• match - match a value to patterns
• mod - define a module
• move - make a closure take ownership of all its captures
• mut - denote mutability in references, raw pointers, or pattern bindings
• pub - denote public visibility in struct fields, impl blocks, or modules
• ref - bind by reference
• return - return from function
• Self - a type alias for the type we are defining or implementing
• self - method subject or current module
• static - global variable or lifetime lasting the entire program execution
• struct - define a structure
• super - parent module of the current module
• trait - define a trait
• true - Boolean true literal
• type - define a type alias or associated type
• union - define a union; is only a keyword when used in a union declaration
• unsafe - denote unsafe code, functions, traits, or implementations
• use - bring symbols into scope
• where - denote clauses that constrain a type
• while - loop conditionally based on the result of an expression
Keywords Reserved for Future Use
abstract, become, box, do, final, macro, override, priv, try, typeof, unsized, virtual, yield
Raw identifiers are the syntax that lets you use keywords where they wouldn't normally be allowed. You use a raw identifier by prefixing a keyword with r#
fn r#match(needle: &str, haystack: &str) -> bool {
haystack.contains(needle)
}
fn main() {
assert!(r#match("foo", "foobar"));
}
This code will compile without any errors. Note the r# prefix on the function name in its definition as well as where the function is called in main .
| Operator | Example | Explanation | Overloadable? |
|---|---|---|---|
| ! | ident!(...) , ident!{...} , ident![...] | Macro expansion | |
| ! | !expr | Bitwise or logical complement | Not |
| != | expr != expr | Nonequality comparison | PartialEq |
| % | expr % expr | Arithmetic remainder | Rem |
| %= | var %= expr | Arithmetic remainder and assignment | RemAssign |
| & | &expr , &mut expr | Borrow | |
| & | &type , &mut type , &'a type , &'a mut type | Borrowed pointer type | |
| & | expr & expr | Bitwise AND | BitAnd |
| &= | var &= expr | Bitwise AND and assignment | BitAndAssign |
| && | expr && expr | Short-circuiting logical AND | |
| * | expr * expr | Arithmetic multiplication | Mul |
| *= | var *= expr | Arithmetic multiplication and assignment | MulAssign |
| * | *expr | Dereference | Deref |
| * | *const type , *mut type | Raw pointer | |
| + | trait + trait , 'a + trait | Compound type constraint | |
| + | expr + expr | Arithmetic addition | Add |
| += | var += expr | Arithmetic addition and assignment | AddAssign |
| , | expr, expr | Argument and element separator | |
| - | - expr | Arithmetic negation | Neg |
| - | expr - expr | Arithmetic subtraction | Sub |
| -= | var -= expr | Arithmetic subtraction and assignment | SubAssign |
| -> | fn(...) -> type , \|…\| -> type | Function and closure return type | |
| . | expr.ident | Member access | |
| .. | .. , expr.. , ..expr , expr..expr | Right-exclusive range literal | PartialOrd |
| ..= | ..=expr , expr..=expr | Right-inclusive range literal | PartialOrd |
| .. | ..expr | Struct literal update syntax | |
| .. | variant(x, ..) , struct_type { x, .. } | “And the rest” pattern binding | |
| ... | expr...expr | (Deprecated, use ..= instead) In a pattern: inclusive range pattern | |
| / | expr / expr | Arithmetic division | Div |
| /= | var /= expr | Arithmetic division and assignment | DivAssign |
| : | pat: type , ident: type | Constraints | |
| : | ident: expr | Struct field initializer | |
| : | 'a: loop {…} | Loop label | |
| ; | expr; | Statement and item terminator | |
| ; | [...; len] | Part of fixed-size array syntax | |
| << | expr << expr | Left-shift | Shl |
| <<= | var <<= expr | Left-shift and assignment | ShlAssign |
| < | expr < expr | Less than comparison | PartialOrd |
| <= | expr <= expr | Less than or equal to comparison | PartialOrd |
| = | var = expr , ident = type | Assignment/equivalence | |
| == | expr == expr | Equality comparison | PartialEq |
| => | pat => expr | Part of match arm syntax | |
| > | expr > expr | Greater than comparison | PartialOrd |
| >= | expr >= expr | Greater than or equal to comparison | PartialOrd |
| >> | expr >> expr | Right-shift | Shr |
| >>= | var >>= expr | Right-shift and assignment | ShrAssign |
| @ | ident @ pat | Pattern binding | |
| ^ | expr ^ expr | Bitwise exclusive OR | BitXor |
| ^= | var ^= expr | Bitwise exclusive OR and assignment | BitXorAssign |
| \| | pat \| pat | Pattern alternatives | |
| \| | expr \| expr | Bitwise OR | BitOr |
| \|= | var \|= expr | Bitwise OR and assignment | BitOrAssign |
| \|\| | expr \|\| expr | Short-circuiting logical OR | |
| ? | expr? | Error propagation | |
The following list contains all symbols that don't function as operators; that is, they don't behave like a function or method call.
| Symbol | Explanation |
|---|---|
| 'ident | Named lifetime or loop label |
| ...u8 , ...i32 , ...f64 , ...usize , etc. | Numeric literal of specific type |
| "…" | String literal |
| r"..." , r#"..."# , r##"..."## , etc. | Raw string literal, escape characters not processed |
| b"…" | Byte string literal; constructs an array of bytes instead of a string |
| br"..." , br#"..."# , br##"..."## , etc. | Raw byte string literal, combination of raw and byte string literal |
| '…' | Character literal |
| b'…' | ASCII byte literal |
| \|...\| expr | Closure |
| ! | Always empty bottom type for diverging functions |
| _ | “Ignored” pattern binding; also used to make integer literals readable |
Paths
| Symbol | Explanation |
|---|---|
| ident::ident | Namespace path |
| ::path | Path relative to the crate root (i.e., an explicitly absolute path) |
| self::path | Path relative to the current module (i.e., an explicitly relative path) |
| super::path | Path relative to the parent of the current module |
| type::ident , <type as trait>::ident | Associated constants, functions, and types |
| <type>::… | Associated item for a type that cannot be directly named (e.g., <&T>::... , <[T]>::... , etc.) |
| trait::method(…) | Disambiguating a method call by naming the trait that defines it |
| type::method(…) | Disambiguating a method call by naming the type for which it's defined |
| <type as trait>::method(...) | Disambiguating a method call by naming the trait and type |
Generics
| Symbol | Explanation |
|---|---|
| path<...> | Specifies parameters to a generic type in a type (e.g., Vec<u8> ) |
| path::<...> , method::<...> | Specifies parameters to a generic type, function, or method in an expression; often referred to as turbofish (e.g., "42".parse::<i32>() ) |
| fn ident<...> … | Define generic function |
| struct ident<...> ... | Define generic structure |
| enum ident<...> … | Define generic enumeration |
| impl<...> … | Define generic implementation |
| for<...> type | Higher-ranked lifetime bounds |
| type<ident=type> | A generic type where one or more associated types have specific assignments (eg, Iterator<Item=T> ) |
Trait Bound Constraints
| Symbol | Explanation |
|---|---|
| T: U | Generic parameter T constrained to types that implement U |
| T: 'a | Generic type T must outlive lifetime 'a (meaning the type cannot transitively contain any references with lifetimes shorter than 'a ) |
| T: 'static | Generic type T contains no borrowed references other than 'static ones |
| 'b: 'a | Generic lifetime 'b must outlive lifetime 'a |
| T: ?Sized | Allow generic type parameter to be a dynamically sized type |
| 'a + trait , trait + trait | Compound type constraint |
Macros and Attributes
| Symbol | Explanation |
|---|---|
| #[meta] | Outer attribute |
| #![meta] | Inner attribute |
| $ident | Macro substitution |
| $ident:kind | Macro capture |
| $(…)… | Macro repetition |
| ident!(...) , ident!{...} , ident![...] | Macro invocation |
Comments
| Symbol | Explanation |
|---|---|
| // | Line comment |
| //! | Inner line doc comment |
| /// | Outer line doc comment |
| /*...*/ | Block comment |
| /*!...*/ | Inner block doc comment |
| /**...*/ | Outer block doc comment |
Tuples
| Symbol | Explanation |
|---|---|
| () | Empty tuple (aka unit), both literal and type |
| (expr) | Parenthesized expression |
| (expr,) | Single-element tuple expression |
| (type,) | Single-element tuple type |
| (expr, …) | Tuple expression |
| (type, …) | Tuple type |
| expr(expr, …) | Function call expression; also used to initialize tuple structs and tuple enum variants |
| expr.0 , expr.1 , etc. | Tuple indexing |
Square Brackets
| Context | Explanation |
|---|---|
| […] | Array literal |
| [expr; len] | Array literal containing len copies of expr |
| [type; len] | Array type containing len instances of type |
| expr[expr] | Collection indexing. Overloadable ( Index , IndexMut ) |
| expr[..] , expr[a..] , expr[..b] , expr[a..b] | Collection indexing pretending to be collection slicing, using Range , RangeFrom , RangeTo , or RangeFull as the “index” |