feat: generate Chinese titles with Zhipu GLM-4-Flash

- Replace Google Translate with the Zhipu GLM-4-Flash API
- Have the LLM summarize each paper's or project's core content into a Chinese title
- Keep titles concise and punchy, highlighting the technical contribution
- Simplify the output format: title + link (no summary)
- Add a translation progress indicator
bojunc 2026-02-28 00:09:07 +08:00
parent 28c290966f
commit 37bd458c0b
4 changed files with 203 additions and 34 deletions


@@ -1,5 +1,5 @@
 {
-  "lastUpdate": "2026-02-27T15:43:08.320Z",
+  "lastUpdate": "2026-02-27T16:08:50.053Z",
   "urls": [
     "http://arxiv.org/abs/2602.23360v1",
     "http://arxiv.org/abs/2602.23359v1",

daily/2026-02-28_en.md Normal file

@@ -0,0 +1,67 @@
# AI Daily Brief - 2026-02-28
> Collected at: 2/28/2026, 12:08:50 AM
> Total items: 131
## 🔥 Top 10 Highlights
1. sponsors/muratcankoylan
https://github.com/sponsors/muratcankoylan
2. login?return_to=%2Fruvnet%2Fclaude-flow
https://github.com/login?return_to=%2Fruvnet%2Fclaude-flow
3. Search More, Think Less: Rethinking Long-Horizon Agentic Search for Efficiency and Generalization
https://huggingface.co/papers/2602.22675
4. AgentDropoutV2: Optimizing Information Flow in Multi-Agent Systems via Test-Time Rectify-or-Reject Pruning
https://huggingface.co/papers/2602.23258
5. Accelerating Diffusion via Hybrid Data-Pipeline Parallelism Based on Conditional Guidance Scheduling
https://huggingface.co/papers/2602.21760
6. Exploratory Memory-Augmented LLM Agent via Hybrid On- and Off-Policy Optimization
https://huggingface.co/papers/2602.23008
7. login?return_to=%2Fruvnet%2Fwifi-densepose
https://github.com/login?return_to=%2Fruvnet%2Fwifi-densepose
8. login?return_to=%2Fbytedance%2Fdeer-flow
https://github.com/login?return_to=%2Fbytedance%2Fdeer-flow
9. login?return_to=%2Fmoonshine-ai%2Fmoonshine
https://github.com/login?return_to=%2Fmoonshine-ai%2Fmoonshine
10. sponsors/obra
https://github.com/sponsors/obra
## 📂 Categories
### Agent Frameworks
- Toward Expert Investment Teams:A Multi-Agent LLM System with Fine-Grained Trading Tasks
http://arxiv.org/abs/2602.23330v1
- AgentDropoutV2: Optimizing Information Flow in Multi-Agent Systems via Test-Time Rectify-or-Reject Pruning
http://arxiv.org/abs/2602.23258v1
### AI Infrastructure / Inference Optimization
- Bitwise Systolic Array Architecture for Runtime-Reconfigurable Multi-precision Quantized Multiplication on Hardware Accelerators
http://arxiv.org/abs/2602.23334v1
- Invariant Transformation and Resampling based Epistemic-Uncertainty Reduction
http://arxiv.org/abs/2602.23315v1
- Agency and Architectural Limits: Why Optimization-Based Systems Cannot Be Norm-Responsive
http://arxiv.org/abs/2602.23239v1
- InnerQ: Hardware-aware Tuning-free Quantization of KV Cache for Large Language Models
http://arxiv.org/abs/2602.23200v1
- Assessing Deanonymization Risks with Stylometry-Assisted LLM Agent
http://arxiv.org/abs/2602.23079v1
- Rejection Mixing: Fast Semantic Propagation of Mask Tokens for Efficient DLLM Inference
http://arxiv.org/abs/2602.22868v1
- Differentiable Zero-One Loss via Hypersimplex Projections
http://arxiv.org/abs/2602.23336v1
- FairQuant: Fairness-Aware Mixed-Precision Quantization for Medical Image Classification
http://arxiv.org/abs/2602.23192v1
---
*Generated by AINewsCollector*

daily/2026-02-28_zh.md Normal file

@@ -0,0 +1,67 @@
# AI Daily Brief - 2026-02-28
> 采集时间: 2026/2/28 00:08:24
> 总条目: 131
## 🔥 Top 10 重要消息
1. 《构建、优化与调试智能体系统:全面智能体技能集》
https://github.com/sponsors/muratcankoylan
2. Claude智能多代理编排平台构建企业级对话AI系统
https://github.com/login?return_to=%2Fruvnet%2Fclaude-flow
3. “深度搜索,少思多行:重思长周期智能搜索以提升效率和泛化能力”
https://huggingface.co/papers/2602.22675
4. 多智能体系统信息流优化AgentDropoutV2测试时剪枝技术
https://huggingface.co/papers/2602.23258
5. 基于条件指导调度的混合数据管道并行加速扩散模型
https://huggingface.co/papers/2602.21760
6. 混合策略强化学习:探索性记忆增强大型语言模型代理
https://huggingface.co/papers/2602.23008
7. 基于WiFi的颠覆性全身姿态估计系统InvisPose实时穿墙追踪
https://github.com/login?return_to=%2Fruvnet%2Fwifi-densepose
8. 开源SuperAgent工具融合沙箱、记忆、工具、技能与子代理高效处理多级任务
https://github.com/login?return_to=%2Fbytedance%2Fdeer-flow
9. 边缘设备快速精准语音识别技术突破
https://github.com/login?return_to=%2Fmoonshine-ai%2Fmoonshine
10. 构建智能体技能框架与软件开发方法论新范式
https://github.com/sponsors/obra
## 📂 分类汇总
### Agent 框架
- 构建专家级投资团队细粒度交易任务的多智能体LLM系统
http://arxiv.org/abs/2602.23330v1
- 多智能体系统信息流优化AgentDropoutV2测试时剪枝技术
http://arxiv.org/abs/2602.23258v1
### AI 基础设施 / 推理优化
- 硬件加速器上基于位运算的运行时重构多精度量化乘法阵列架构
http://arxiv.org/abs/2602.23334v1
- 基于不变变换与重采样减少认知不确定性的AI模型
http://arxiv.org/abs/2602.23315v1
- 基于规范响应的优化系统无法适应机构与架构限制
http://arxiv.org/abs/2602.23239v1
- 硬件感知无调优量化KV缓存优化大语言模型解码效率
http://arxiv.org/abs/2602.23200v1
- 利用文体学辅助LLM代理评估匿名化风险
http://arxiv.org/abs/2602.23079v1
- 拒绝混合高效DLLM推理中掩码标记快速语义传播技术
http://arxiv.org/abs/2602.22868v1
- 基于超单纯形投影的可微分0-1损失
http://arxiv.org/abs/2602.23336v1
- 公平量化:医疗图像分类的公平感知混合精度量化
http://arxiv.org/abs/2602.23192v1
---
*Generated by AINewsCollector*


@@ -16,43 +16,69 @@ const DAILY_DIR = path.join(__dirname, '../../daily');
 // Proxy configuration
 const PROXY_URL = process.env.HTTP_PROXY || process.env.HTTPS_PROXY || 'http://127.0.0.1:7890';
+// Zhipu AI API configuration
+const ZHIPU_API = 'https://open.bigmodel.cn/api/paas/v4/chat/completions';
+const ZHIPU_KEY = process.env.ZHIPU_KEY || ''; // read from the environment; never commit a real key
+const ZHIPU_MODEL = 'glm-4-flash';
 // Translation cache
 const translateCache = new Map();
-// Translate text with the Google Translate API
-function translateToChinese(text) {
-  if (!text || text.length === 0) return text;
-  // Check the cache
-  if (translateCache.has(text)) {
-    return translateCache.get(text);
+// Summarize and translate into a Chinese title with the LLM
+function summarizeToChinese(title, summary) {
+  const cacheKey = `${title}|||${summary}`;
+  if (translateCache.has(cacheKey)) {
+    return translateCache.get(cacheKey);
   }
   try {
+    const prompt = `请将以下 AI 论文/项目信息总结为一句话中文标题（30字以内），突出核心贡献或创新点。
+标题：${title}
+摘要：${summary || '无'}
+要求：
+1. 只输出翻译后的标题，不要其他内容
+2. 标题要简洁有力，突出技术亮点
+3. 使用专业术语的中文译名`;
     const proxyFlag = PROXY_URL ? `--proxy "${PROXY_URL}"` : '';
-    const encodedText = encodeURIComponent(text.slice(0, 500)); // cap the length
-    const url = `https://translate.googleapis.com/translate_a/single?client=gtx&sl=en&tl=zh-CN&dt=t&q=${encodedText}`;
+    const requestBody = JSON.stringify({
+      model: ZHIPU_MODEL,
+      messages: [
+        { role: 'system', content: '你是一个AI技术专家，擅长总结论文和项目核心内容。' },
+        { role: 'user', content: prompt }
+      ],
+      max_tokens: 100,
+      temperature: 0.3
+    });
+    // Write the body to a temp file to avoid shell-escaping issues
+    const tmpFile = `/tmp/zhipu_request_${Date.now()}.json`;
+    fs.writeFileSync(tmpFile, requestBody);
     const result = execSync(
-      `curl -s ${proxyFlag} -L --max-time 10 "${url}"`,
-      { encoding: 'utf8', timeout: 15000 }
+      `curl -s ${proxyFlag} -X POST "${ZHIPU_API}" -H "Content-Type: application/json" -H "Authorization: Bearer ${ZHIPU_KEY}" -d @${tmpFile}`,
+      { encoding: 'utf8', timeout: 30000, maxBuffer: 1024 * 1024 }
     );
+    // Clean up the temp file
+    try { fs.unlinkSync(tmpFile); } catch (e) {}
     const json = JSON.parse(result);
-    // Parse the translation result
-    let translated = '';
-    if (Array.isArray(json) && Array.isArray(json[0])) {
-      for (const part of json[0]) {
-        if (part && part[0]) translated += part[0];
-      }
+    let translated = json.choices?.[0]?.message?.content?.trim() || title;
+    // Strip any extraneous content
+    translated = translated.split('\n')[0].trim();
+    if (translated.length > 60) {
+      translated = translated.slice(0, 57) + '...';
     }
-    const finalText = translated || text;
-    translateCache.set(text, finalText);
-    return finalText;
+    translateCache.set(cacheKey, translated);
+    return translated;
   } catch (err) {
-    // Translation failed; return the original text
-    return text;
+    // Translation failed; return the original title
+    return title;
   }
 }
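The response post-processing above (keep only the model's first output line, then cap it at 60 characters with an ellipsis) can be sketched as a pure helper. `cleanTitle` is a hypothetical name for illustration, not a function in the script:

```javascript
// Hypothetical helper mirroring summarizeToChinese's post-processing:
// take the first line of the model's reply, trim it, and cap it at
// 60 characters; fall back to the original title if nothing remains.
function cleanTitle(raw, fallback) {
  let t = (raw || '').trim().split('\n')[0].trim();
  if (!t) return fallback;
  if (t.length > 60) t = t.slice(0, 57) + '...';
  return t;
}

// Extra lines after the title are dropped; empty replies fall back.
console.log(cleanTitle('多智能体系统信息流优化\n补充说明', 'Original title'));
console.log(cleanTitle('', 'Original title'));
```

Truncating at 57 characters and appending `'...'` keeps the final string at exactly 60, matching the cap the script enforces.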
@@ -216,17 +242,26 @@ function generateMarkdownZH(items, topCount, topics, date) {
   md += `> 总条目: ${items.length}\n\n`;
   md += `## 🔥 Top ${topCount} 重要消息\n\n`;
+  console.log(`  翻译 Top ${topCount}...`);
   for (let i = 0; i < top10.length; i++) {
     const item = top10[i];
-    const titleZH = translateToChinese(item.title);
-    const summaryZH = item.summary ? translateToChinese(item.summary.slice(0, 200)) : '';
-    md += `${i + 1}. [${titleZH}](${item.url}) - **${item.source}**\n`;
-    if (summaryZH) md += `   > ${summaryZH.slice(0, 150)}${summaryZH.length > 150 ? '...' : ''}\n`;
-    md += '\n';
+    process.stdout.write(`  [${i + 1}/${top10.length}] `);
+    const titleZH = summarizeToChinese(item.title, item.summary);
+    md += `${i + 1}. ${titleZH}\n   ${item.url}\n\n`;
   }
   md += `## 📂 分类汇总\n\n`;
+  let totalCategoryItems = 0;
+  for (const topic of topics) {
+    const topicItems = items.filter(item => {
+      const text = `${item.title} ${item.summary}`.toLowerCase();
+      return topic.keywords.some(k => text.includes(k.toLowerCase()));
+    }).filter(item => !top10.includes(item));
+    totalCategoryItems += topicItems.length;
+  }
+  let processedCount = 0;
   for (const topic of topics) {
     const topicItems = items.filter(item => {
       const text = `${item.title} ${item.summary}`.toLowerCase();
@@ -236,8 +271,10 @@ function generateMarkdownZH(items, topCount, topics, date) {
     if (topicItems.length > 0) {
       md += `### ${topic.name}\n\n`;
       for (const item of topicItems.slice(0, 10)) {
-        const titleZH = translateToChinese(item.title);
-        md += `- [${titleZH}](${item.url}) - ${item.source}\n`;
+        processedCount++;
+        process.stdout.write(`  [${processedCount}/${totalCategoryItems}] `);
+        const titleZH = summarizeToChinese(item.title, item.summary);
+        md += `- ${titleZH}\n  ${item.url}\n`;
       }
       md += '\n';
     }
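The progress indicator added above needs a total before the per-item counter can print `[processed/total]`, hence the extra pre-count pass over the topics. That pass can be sketched as a standalone function; `countCategoryItems` is a hypothetical name, and the item/topic shapes are assumed from the surrounding code:

```javascript
// Hypothetical sketch of the pre-count pass: an item belongs to a topic
// when any topic keyword appears in its title or summary (case-insensitive);
// items already shown in the Top-10 list are excluded. An item matching
// several topics is counted once per topic, as in the generator loop.
function countCategoryItems(items, topics, top10) {
  let total = 0;
  for (const topic of topics) {
    total += items
      .filter(item => {
        const text = `${item.title} ${item.summary}`.toLowerCase();
        return topic.keywords.some(k => text.includes(k.toLowerCase()));
      })
      .filter(item => !top10.includes(item)).length;
  }
  return total;
}

const items = [
  { title: 'Fast KV cache quantization', summary: '' },
  { title: 'A multi-agent LLM system', summary: '' }
];
const topics = [
  { name: 'Agent', keywords: ['agent'] },
  { name: 'Infra', keywords: ['quantization', 'kv cache'] }
];
console.log(countCategoryItems(items, topics, [])); // 2
```

Running the same filters twice (once to count, once to render) trades a little CPU for not having to buffer the per-topic results; with 131 items per day the overhead is negligible.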
@@ -258,9 +295,7 @@ function generateMarkdownEN(items, topCount, topics, date) {
   md += `## 🔥 Top ${topCount} Highlights\n\n`;
   for (let i = 0; i < top10.length; i++) {
     const item = top10[i];
-    md += `${i + 1}. [${item.title}](${item.url}) - **${item.source}**\n`;
-    if (item.summary) md += `   > ${item.summary.slice(0, 150)}${item.summary.length > 150 ? '...' : ''}\n`;
-    md += '\n';
+    md += `${i + 1}. ${item.title}\n   ${item.url}\n\n`;
   }
   md += `## 📂 Categories\n\n`;
md += `## 📂 Categories\n\n`;
@@ -282,7 +317,7 @@ function generateMarkdownEN(items, topCount, topics, date) {
     const topicNameEN = topicNamesEN[topic.name] || topic.name;
     md += `### ${topicNameEN}\n\n`;
     for (const item of topicItems.slice(0, 10)) {
-      md += `- [${item.title}](${item.url}) - ${item.source}\n`;
+      md += `- ${item.title}\n  ${item.url}\n`;
    }
    md += '\n';
  }