
How to get the contents of a webpage in a shell variable?

1 Answer


The tools most commonly used to retrieve web page content in shell scripts are curl and wget. Both can download web pages or API responses from the command line, and command substitution ($(...)) stores their output in a variable. Here are the steps to store web page content in shell variables using these tools:

Using the curl Command

curl is a commonly used command-line tool for transferring data from servers. It supports various protocols, including HTTP, HTTPS, etc. To assign web page content to a shell variable, use the following command:

```bash
content=$(curl -s http://example.com)
echo "$content"
```

Here, the -s option suppresses progress bars and error messages during execution. http://example.com is the URL of the web page you want to download.
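In a script it is often worth failing loudly when the download itself fails. A minimal sketch (the fetch helper name is our own; the file:// URL is used only so the example runs without a network connection):

```shell
#!/bin/sh
# -f makes curl exit non-zero on HTTP 4xx/5xx responses,
# -s silences the progress meter, -S re-enables error messages.
fetch() {
    curl -fsS "$1"
}

# A file:// URL exercises the same code path without a network connection.
tmpfile=$(mktemp)
printf '<html>hello</html>' > "$tmpfile"

if content=$(fetch "file://$tmpfile"); then
    echo "$content"
else
    echo "download failed" >&2
fi
rm -f "$tmpfile"
```

Because the command substitution preserves curl's exit status, the if branch only runs when the download actually succeeded.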

Using the wget Command

wget is also a widely used free network tool for downloading files. Unlike curl, wget is specifically designed for downloading content, while curl offers more features. To assign web page content to a variable, use the following command:

```bash
content=$(wget -qO- http://example.com)
echo "$content"
```

Here, -q indicates quiet mode, suppressing download progress and error messages. -O- directs the downloaded content to standard output.
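Since the two commands are interchangeable for this task, a script can also use whichever one is installed. A sketch of that idea (the fetch_page wrapper and its fallback order are our own choices, not a standard idiom):

```shell
#!/bin/sh
# Hypothetical wrapper: prefer curl, fall back to wget when curl is absent.
fetch_page() {
    if command -v curl >/dev/null 2>&1; then
        # --max-time bounds how long we wait for a slow or unreachable server.
        curl -fsS --max-time 10 "$1"
    else
        wget -qO- -T 10 "$1"
    fi
}

# Checking the exit status before using the variable avoids acting on
# partial or empty output after a failed download.
if page=$(fetch_page "http://example.com"); then
    echo "fetched ${#page} bytes"
else
    echo "download failed" >&2
fi
```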

Example Application

Suppose we need to retrieve content from a weather forecast API and parse certain data. Using curl, it can be done as follows:

```bash
weather=$(curl -s "http://api.weatherapi.com/v1/current.json?key=YOUR_API_KEY&q=NewYork")
temperature=$(echo "$weather" | jq '.current.temp_c')
echo "Current temperature in New York is $temperature °C"
```

Note that YOUR_API_KEY must be replaced with your actual key; spaces in the placeholder would break the URL. Quoting "$weather" keeps the JSON intact when it is piped to jq.

Here, the jq tool is used to parse JSON content and extract the temperature data.
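The jq step can be tried without calling the API at all by feeding it a payload shaped like the one above (the sample JSON below is invented for illustration, not real API output):

```shell
#!/bin/sh
# Invented sample payload mimicking the API's response shape.
weather='{"current":{"temp_c":21.5,"condition":{"text":"Sunny"}}}'

# -r prints the raw value without JSON quoting; quoting "$weather"
# preserves the JSON exactly as stored in the variable.
temperature=$(echo "$weather" | jq -r '.current.temp_c')
echo "Current temperature is $temperature °C"
```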

In summary, using curl or wget allows you to easily retrieve web page content in shell scripts and process the data further using various text processing tools.

July 30, 2024, 00:21
